At Stanford University, where I am a senior, tech chief executives are something like rock stars. When the Nvidia founder Jensen Huang showed up to give a guest lecture late last month, students mobbed him. They offered up their laptops and personal workstations, desperate for a signature from a kingpin of the artificial intelligence era. Last year, speaking to the same class, Mr. Huang gave out gleaming $4,000 graphics cards signed in gold ink — the ultimate dorm room status symbol.
Stanford has always been a haven for aspiring techies, but recent events have taken the school into uncharted territory. A.I. is everything. We talk about it in the dining halls and in history classes, on dates and while smoking with friends, at the gym and in communal dorm bathrooms. Nearly all of higher education has been overtaken by this technology, and Stanford is a case study in how far it can go. For the past four years, my classmates and I have been the subjects of a high-stakes experiment.
We are the first college class of the A.I. era — ChatGPT arrived on campus about two months after we did. When we graduate next month, this technology will have altered our lives in very different ways. For some, it has opened the door to staggering wealth. But for many who came to Stanford — just four years ago! — when a degree seemed like a guaranteed ticket to a high-paying job, the door has been slammed shut. For all of us, A.I. has permanently changed how we think and behave.
Stanford already had a shaky reputation for integrity when I arrived in 2022. It had produced the Theranos fraudster Elizabeth Holmes (now serving a 10-year prison sentence), the crypto fraudster Do Kwon (now serving a 15-year prison sentence) and the founders of Juul (which was forced to pay billions for getting kids hooked on vapes). All of these scandals were in the news when freshman year began. Many of my classmates arrived idealistic and hopeful, but among the strivers seeking a path to fortune, hustle culture was the accepted way of life. Now A.I. has made deception easier and more remunerative than ever before.
Cheating has become omnipresent. I don’t know a single person who hasn’t used A.I. to get through some assignment in college, yet the school was at first slow to realize how widespread this would become. As freshman year went on, some professors suggested that the “nuclear option” might be called for: allowing faculty to proctor in-person exams, a practice banned at the university for over a century to demonstrate “confidence in the honor” of students.
In our tech-enabled, newly A.I.-powered world, students were increasingly fudging just about everything. They would embezzle dorm funds to spend on their friends and lie about having Covid to get the Uber Eats credits that the school offered to those in quarantine. Some kids I knew published a paper claiming a groundbreaking A.I. advance. Online sleuths quickly pointed out that it appeared to be just a stolen Chinese model, to which the two Stanford co-authors responded by blaming the plagiarism on the third author.
In junior year, 49 percent of the 849 computer science majors who responded to an annual campus survey said they would rather cheat on an exam than fail. A friend of mine captured the school’s ethos while we were discussing the tech hardware and other items our student club neglected to return to corporate sponsors. It was all, I recall her saying, “just a little bit of fraud.”
About halfway through freshman year, some coding classes started requiring students to sign a declaration — “I did not utilize ChatGPT” — to submit each assignment. During the first term in which these attestations appeared, I watched a freshman I knew sign the declaration that he’d done his homework without A.I. while ChatGPT sat open in the next window, all from the deck of a yacht at a party financed by venture capitalists. The incentive structures were not aligned toward honesty. One could get ahead, quickly, by cutting corners and by focusing on self-presentation.
The money is a big part of it. A.I. has merely accelerated a trend that was already underway at Stanford and is mirrored at many of the country’s most corporatized universities: Education itself can come to seem secondary to enabling future success, frequently defined as a future windfall.
The first time our college class gathered together was for a convocation ceremony in late September 2022. As one of the speakers droned on, I remember looking around and seeing a number of my classmates slumped over in the shade, dozing off. One of those kids is going to become a billionaire soon, it occurred to me. I wondered who it would be, and how.
At first the answer seemed to be cryptocurrency, and then it was A.I.
Most of my friends remember where they were and what they were doing when ChatGPT came out on Nov. 30, 2022. I was nearing the end of my time in Stanford’s infamous computer science “weeder” course, CS107. Like organic chemistry for pre-meds, this was the class that filtered out the true coders from those without the requisite hustle (with lots of shameless public tears involved).
The velocity of change that began on the day ChatGPT entered our lives was stunning. A friend texted me a link to the research preview of OpenAI’s latest demo: “Have you seen this yet? It’s INSANE.” We began kicking around silly prompts, reveling as ChatGPT explained the bubble-sort algorithm “in the style of a fast-talkin’ wise guy from a 1940s gangster movie.” It’s “very good. Very very good,” I messaged my friend. Still, neither of us understood that this would mark the transformation of A.I. from a technology to a product.
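For the uninitiated, bubble sort is the toy algorithm nearly every intro student meets: it sorts a list by repeatedly swapping adjacent out-of-order neighbors until a full pass produces no swaps. A minimal Python sketch of the algorithm ChatGPT was riffing on:

```python
def bubble_sort(items):
    """Sort a list in place by swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        swapped = False
        # After pass i, the largest i elements have "bubbled" to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a clean pass means the list is sorted; stop early
            break
    return items

bubble_sort([5, 1, 4, 2, 8])  # -> [1, 2, 4, 5, 8]
```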
Students were probably the earliest wide-scale adopters. After all, it was far and away the quickest route to an A. When I took CS107, the only viable way for people to cheat was to seek out a student who’d gone through the class before and beg for solutions to the notoriously difficult problem sets. There was no alternative to putting in a large amount of work. Even if one did obtain the answers from another student (engaging, by the way, in a social act, if nothing else), the students I knew who did this still spent hours sculpting their stolen code so as not to be caught.
Few cheated in this most overt fashion back then. But a month later, any student could instead turn to a chatbot, plugging in a prompt alone in a dorm room and mindlessly regurgitating the result. “I remember the first time I used it feeling an immediate sense of guilt,” a friend recently told me. “Now it’s just normal.”
Half of the laptops in any lecture seem to be open to ChatGPT or Claude. In the beginning, experimenting with models was a pastime for the nerds; showing off the early access you got to the next frontier large language model was a status symbol, and people would come pleading for your authorization keys to try it out for themselves. In just a few short years, however, A.I. has become a fact of life. “It’s all we talk about,” my professor of ancient Greek art history remarked recently.
In April 2026, the proctored exam policy finally went into effect. Because of A.I., most of us now take our tests by writing in blue books, like students a century ago, scribbling out answers by hand under keen observation. Meanwhile, we wonder constantly what will happen next.
Many students view these large language models as a job threat. The machines have gotten so much better at coding that junior engineers can’t really compete. A Stanford computer science degree means something very different today from what it did when we set foot on campus — no longer is there a functional guarantee of an entry-level position.
But for those willing to dream up a company with “A.I.” in the name, there is a nearly surefire route to monetary gain. Perplexity, started right when my freshman year began, is an example of a “wrapper” start-up — in other words, a company that does not have its own proprietary A.I. and merely repackages existing models in a different form. It is a search tool, and loses money essentially every time a new user inputs a query. In April 2024, it reached a billion-dollar valuation; two months later, that number tripled. In May 2025, it announced that it was fund-raising at a $14 billion valuation, which had grown to $18 billion by July, and $20 billion by September.
Money in Silicon Valley has become a game of almost meaningless numbers bandied about in a breathtakingly casual manner. It contributes to the whirlpool effect students at Stanford have felt around tech and lucre — if your roommate can drop out and start a nine-figure company, why shouldn’t you profit, too? Why put all your energy into being a student when it seems like everyone around you is getting rich? One time during sophomore year, I was working on homework in my dorm common room with an acquaintance when she offhandedly remarked, “I bought a house in Las Vegas last week.” She continued, “It’s good for taxes.” It’s hard to put your earbuds in and get right back to your problem set when someone says something like that.
Yet the same Stanford dropouts who seem to be making the most money right now are often working on the very technology that is worsening life for their former college classmates.
Emerging research has begun to show what most people feel is obvious: Relying on A.I. for cognitive tasks can reduce one’s own intellectual capacity and resilience. It’s one thing to use it in the workplace, but in the classroom, difficulty is often precisely the point. Sure, a robot can lift 600 pounds much more easily than I can — but that doesn’t much help me if I’m trying to work out. The same goes for the thinking exercise of education. However, telling that to students is about as attractive a message as “eat your veggies” or “sleep eight hours.” It feels like scolding.
Even in the heart of the Silicon Valley techno-utopia, most people know that our tech is bad for us, or at least that it can be. A.I. is often a tremendous productivity boost, yet my friends increasingly refer to both short-form video and their A.I. chat logs in the language of addiction. It’s becoming baked in, shaping our generational character. We are a digital generation, growing only more attached to the virtual world.
The technology behind A.I. is wickedly clever, and back when large language models were still a research experiment — before they propped up the U.S. economy — my friends and I bubbled with excitement. I remember trying to explain to my grandfather, who has since died, that “backpropagation,” a technique vital to A.I., grew out of attempts to quantitatively prove Freud’s theories about the “flow of psychic energy.” I don’t think I really sold Gramps on why he should care — but to me, the development of A.I. was human genius at its finest, and I couldn’t wait to open the arXiv links people would text me containing the latest and greatest research. The output of a model didn’t matter anywhere near as much as how it was designed.
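Backpropagation itself is less mystical than it sounds: it is the chain rule from calculus, used to compute how a model's error changes with each of its weights so that the weights can be nudged downhill. A toy illustration in Python, with a single artificial neuron and numbers invented purely for the example:

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, target, lr=0.5):
    """One step of gradient descent on a one-neuron model y = sigmoid(w*x + b)."""
    y_hat = sigmoid(w * x + b)
    # Chain rule (the heart of backpropagation), for squared-error loss
    # L = (y_hat - target)^2:  dL/dz = 2*(y_hat - target) * sigmoid'(z).
    dz = 2.0 * (y_hat - target) * y_hat * (1.0 - y_hat)
    w -= lr * dz * x  # dL/dw = dL/dz * x
    b -= lr * dz      # dL/db = dL/dz
    return w, b

# Repeated small nudges pull the prediction toward the target.
w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=1.0, target=1.0)
```

After 200 steps the neuron's output on the input has climbed from 0.5 toward 1.0; real networks just apply the same chain-rule bookkeeping across millions of weights at once.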
Now, the opposite is true. A.I. is an application that people actually rely on, and companies have become less and less transparent about its design. What counts is the immediate response you receive when you send a reading to ChatGPT to be summarized on your walk to class. Most students call OpenAI’s model “Chat.” Many refer to it familiarly, consulting with Chat repeatedly over the course of a day, letting it decide how to text a situationship and confidently repeating hallucinated assertions while in line at the coffee shop. For years, online livestreamers have used the word “Chat” to interact with their audiences, asking commenters to tell them what choices to make in video games. That students now use the same name for A.I. feels appropriate. What really is the distinction between a nameless, faceless human you’ll never meet except over the internet and a statistical approximation of the same thing?
The internet has already allowed us to feel more connected than ever while becoming lonelier than ever. A.I. lets us cut out the human part of human interaction entirely.
When I was sitting in a recent class on love in French fiction — exactly the kind of course that a senior takes before it all comes to an end — I listened to the first student presentation, titled “Applying the Gale-Shapley Algorithm to ‘The Princess of Clèves.’” The enterprising presenters sought to resolve the contretemps of the 1678 romantic novel through a computer science matching algorithm. Love was something to “be optimized.” Next to me, one student scribbled on a branded notepad from Hudson River Trading, a quantitative trading firm where fresh graduates can earn upward of $600,000 a year. Another had a sticker on her laptop: “Practice safe C.S.” The class could not have felt more Stanford.
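Gale-Shapley, for the record, is real mathematics: a “deferred acceptance” procedure in which one side proposes down its preference list and the other side tentatively accepts, trading up whenever a preferred proposer comes along, until the matching is stable. A Python sketch, with invented preference lists standing in for the novel's lovers:

```python
def gale_shapley(suitor_prefs, suited_prefs):
    """Deferred acceptance: suitors propose in preference order; the
    other side keeps its best offer so far. Returns suitor -> partner."""
    free = list(suitor_prefs)                 # suitors not yet matched
    next_choice = {s: 0 for s in suitor_prefs}
    engaged_to = {}                           # partner -> current suitor
    # Precompute each partner's ranking of suitors (lower = preferred).
    rank = {p: {s: i for i, s in enumerate(prefs)}
            for p, prefs in suited_prefs.items()}
    while free:
        suitor = free.pop()
        target = suitor_prefs[suitor][next_choice[suitor]]
        next_choice[suitor] += 1
        current = engaged_to.get(target)
        if current is None:
            engaged_to[target] = suitor       # first offer: accept tentatively
        elif rank[target][suitor] < rank[target][current]:
            engaged_to[target] = suitor       # trade up; jilted suitor re-enters
            free.append(current)
        else:
            free.append(suitor)               # rejected; will propose again
    return {s: t for t, s in engaged_to.items()}

# Hypothetical preference lists, invented for illustration.
suitors = {"A": ["X", "Y"], "B": ["Y", "X"]}
suited = {"X": ["A", "B"], "Y": ["B", "A"]}
match = gale_shapley(suitors, suited)  # -> {"A": "X", "B": "Y"}
```

Gale and Shapley proved the outcome is stable: no two people would both prefer each other to the partners the algorithm assigns them, which is presumably more than one can say for the novel.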
Living on campus for the past four years has been an eye-opening journey. Higher education was not equipped for the A.I. revolution. Someday in the future the fully autonomous Clawdbots or Moltbots (or whatever people call them) will laugh to themselves about this silly interregnum when universities seemed paralyzed, trying to bridge the gap between the liberal education of yore and the future in which humans have no monopoly on intelligence.
For us, this was college.
