Nate Bowling: American Teacher Abroad

The Feds Were Asleep at the Switch for Crypto, AI Could Be Much Worse

February 26, 2023 Nathan Bowling

Police in Dallas used a robot armed with a brick of C4 to kill a barricaded shooter, ending a standoff in 2016. The department later fought the release of records about the decision-making process leading up to the killing.

An annoying part about getting older is that you realize how cyclical things are and that much of life is the same hustles and hassles with new names and labels. Whatever your thoughts about cryptocurrency (I'm a skeptic but also own a small amount), it's indisputable that the sector is rife with scams, rug pulls, and unregulated securities. As a result, billions were lost to crypto hacks and scams over the last two years (see table below). This isn't counting firms like FTX, Celsius, and Voyager that went under, wiping out an additional 200 billion dollars from retail investors and people who trusted the shadow banks called crypto exchanges.

The Securities and Exchange Commission (SEC) finally woke up from its multi-decade slumber this month and issued massive fines against several crypto exchanges and the influencers shilling for them. Retired NBA star Paul Pierce was fined 1.4 million dollars for promoting a token called Ethereum Max. EMax was a pump-and-dump scheme where new investors were served up as exit liquidity for the founders. EMax currently trades at $0.00000000099 per worthless token. Kim Kardashian was fined a similar amount for touting the same token in December. Regulators have stablecoins in their sights. Crypto exchange Kraken agreed to a 30 million dollar fine and to shut down its staking program (if you don't know what crypto staking is, you're probably better off that way). Kraken's rival Binance is expecting a massive fine as well.

2022 was a record year for hackers in crypto - Source: Decrypt, data from Chainalysis

The writing was on the wall for years about crypto scams, but federal authorities waited over a decade before acting. In the interim, millions of people were harmed. A regulatory framework for crypto exists and has existed since before the first token came to market. We can't afford to wait fourteen or even four years for the government to set the ground rules for AI. The potential for society-wide harm is incalculably larger.

After-the-fact debates are biased toward the expansion, rather than limitation, of a practice. In 2016, the city of Dallas had to reckon with the question of whether it would allow police to kill a barricaded shooter with a remote-operated robot. They decided "yes," this is an acceptable use of force. Unfortunately, they decided it retroactively, months after the police had killed their target (with a Remotec Andros Mark V A-1, manufactured by Northrop Grumman). The police department refused to release documents related to the decision to use the robot, and the city absolved the chief and the entire chain of command after the incident. The police chief, David Brown, now runs the police department in Chicago, and there's now a precedent regarding the use of remote-operated robots to kill people.

As venture capital abandons crypto projects, opting to fund cowboy AI projects, policymakers can't be passive. The potential harms from AI in journalism, financial markets, deep-fake-aided scams, and law enforcement use of force could do far more damage than the crypto bros ever did. Letting the DARPA and Boston Dynamics chips (see video) fall where they may is societal malpractice. If my concerns here seem alarmist to you, imagine trying to explain in-flight wifi to someone in 1994.

We can't wait to decide the limits we will put on the use of AI until after it's deployed. If you don't think there are people in law enforcement salivating to deploy AI robots and drones in low-income and Black neighborhoods, you don't know American history.

In Politics, Society Tags AI, Civil Liberties

Another Take on that Silly Chat Bot (or Death to the High School Essay)

January 22, 2023 Nathan Bowling
A creepy image of a robot playing a keyboard.

I’m not opposed to new technology. Hell, I love a new gadget as much as the next guy. But we need to pump the brakes and have some collective conversations about the mainstreaming of AI.

I hold heterodox views on teaching writing. In brief, the way we teach writing in K-12 largely prepares students for further academia and not actual life. For example, I don’t use MLA (or APA) citations in my everyday life; you likely don’t either. I haven’t used them since grad school. When I want to cite a source, I use a hyperlink, occasionally a footnote.

When it comes to writing tasks, many teachers are obsessed with page and word counts. I try my best to avoid them. When I assign a task to a student, I tell them how many ideas or arguments they need to present, not how much to write. If a student can make a coherent argument for the abolition of the filibuster in the Senate, using two arguments, some evidence, and a counterclaim in 800 words, great! Oh, you need 1,800 words? Fine. In the end, I care more about the ideas my students are interrogating than about the volume of writing they produce.

Writing instruction should ideally center on real-life use cases. Students need opportunities to play with complex ideas rather than writing fewer, longer, high-stakes pieces. I've been paid to write. Definitionally, I am a professional writer (don't laugh). On every occasion I have been paid to write something, the piece has never been much more than 1,000 words. If that's good enough for Slate, it should be good enough for an IB/AP/A-Level comp class.

Now before you start erecting guillotines… Yes, students need to write and revise more often. Yes, they need to be taught to write for specific purposes and contexts (a wedding toast, a resume, a cover letter). But if a high school student can craft a coherent, thoughtful 1,000-word essay, they're in good shape. More isn't better; it's just more.

Lastly, almost all the writing my students do in class is hand-written, on-demand. I give them a prompt and some stimulus (a map, a data set, a passage from a primary source) and they go to town for the period. But in each class I teach, there’s usually also one longer, more formal essay each year where students are required to demonstrate more traditional essay skills. I really don’t enjoy reading or grading them, but I understand the exercise has some value.

The preceding was my philosophy on writing until last week when that silly chat bot rolled into room 157.

Real talk, I am never assigning another out-of-class essay as long as I live. Ain't no way in hell I'm gonna throw away my evenings and Sunday afternoons trying to figure out if the essay I am reading is Charlie's or a chatbot's. Nope, nein, nada—that is for suckers. I ain't no sucka.

But it's bigger than that. The emergence of AI-created content into the mainstream of our society with essentially no public debate or government regulation is incredibly problematic. Even worse, OpenAI, the creator of ChatGPT (this is the only time I will use the name of the bot in question because every time you mention them you're advertising for them), was co-founded by the problematic richest man on the planet, Elon Musk. Even worse squared, another co-founder, Sam Altman, was behind a massive crypto scam, Worldcoin. It promised to provide a form of UBI by collecting iris scans from half a million people in developing states in exchange for a crypto token that now trades at $0.02221. I am not making this up—this is possibly the worst idea ever, carried out by the worst people possible.

To be clear:

I want nothing to do with it.

Burn it with fire.

Let it fall forever in the Mines of Moria with the Balrog that killed Gandalf.

If you think I am being extreme here, that’s okay. Most people I talk to about this topic say the same. I got called a luddite for this take on my own podcast Friday night.

Here's the thing. Philosophically, when I am presented with a moral question, I assume the "most likely, worst case scenario" and work backwards in crafting my personal response and preferred public policy outcome. For example, should we arm teachers? Well, do you want a racist Karen teacher who "fears for her life" shooting a Black middle schooler? No? Me either. So that's a rubbish idea. Next, do we want the coverage of the upcoming election to be a torrent of partisan AI-crafted propaganda and foreign-funded AI disinformation? If your answer is no (and unless you're a psycho or a libertarian techno-triumphalist, the answer should be no), we have to ask ourselves how we prevent this dystopian hellscape scenario from taking place.

That's where my conversations about mainstreaming AI start. Some of these pieces coming out from teachers about how they plan to integrate the bots in their practice are the most naive nonsense I've read in my whole life. Obviously, AI and machine learning are coming and have a place in our future. But do we have to let some of the worst people on the planet implement it with literally no regulatory checks, no foresight, and no inclusive societal discourse? That's just silly, but not as silly as assigning the same tired essay prompts in 2023.

In Education, Society Tags AI, chatbot
