The Supreme Court Declines to Dismantle Democracy - The Court has consigned the independent-state-legislature theory to the “dustbin of history.” But, even as the electoral system remains intact, it’s worth reckoning with its fragility. - link
What Prigozhin’s Half-Baked “Coup” Could Mean for Putin’s Rule - Although the immediate threat of revolt has been extinguished, the episode may embolden future challengers to Russia’s status quo. - link
Does It Matter That Neil Gorsuch Is Committed to Native American Rights? - The Justice doesn’t just join with the liberals on the bench when it comes to tribal rights; he often seems to lead them. - link
Why Donald Trump Was So Mad at Mark Milley That He Confessed to a Crime - The backstory on the tape that could get the ex-President convicted in the classified-documents case. - link
Joe Biden Tries to Change the Narrative on the Economy - The President gave a speech touting his economic record, which is stronger than he has been given credit for. - link
The hard problem of faked data in science.
Francesca Gino is a Harvard Business School professor who studies, among other things, dishonesty. How often do people lie and cheat when they think they can get away with it? How can people be prompted to lie or cheat less often?
Those are some great questions. But it’s been a rough few years for the field of dishonesty studies because it has turned out that several of the researchers were, well, making up their data. The result is a fascinating insight into dishonesty, if not the one that the authors intended.
This story starts with a 2012 paper about academic dishonesty co-authored by Gino. The paper claimed that if you ask people to sign an honesty commitment before doing a project they have the opportunity to cheat on, they’re much less likely to cheat than if they sign the honesty pledge at the end of the experiment.
“Signing before — rather than after — the opportunity to cheat makes ethics salient when they are needed most and significantly reduces dishonesty,” the paper claimed. It featured three different experiments: two in a lab setting and one field experiment with reporting odometer mileage when applying for car insurance.
In 2021, that paper was retracted when it turned out the data from the third experiment — the one about the car insurance — didn’t add up. Other researchers tried to replicate the paper’s eye-popping results and ran into a bunch of inconsistencies.
The spotlight then quickly fell on one of the paper’s authors, Dan Ariely, a behavioral economist at Duke University and the author of The Honest Truth About Dishonesty. Ariely admitted that he “mislabeled” some data but denied that he deliberately falsified anything, proposing it may have been falsified by the insurance company he partnered with. But records show that he was the last to modify the spreadsheet in which the falsified data appeared.
That seemed to be the end of it. With the paper more than a decade old, it’d be hard to reach any definitive conclusions about what exactly happened. But it turns out that it was only the beginning. In a report published last week, a team of independent investigators laid out their evidence that there was actually a lot more fraud in the academic dishonesty world than that.
“In 2021, we and a team of anonymous researchers examined a number of studies co-authored by Gino, because we had concerns that they contained fraudulent data,” the new report begins. “We discovered evidence of fraud in papers spanning over a decade, including papers published quite recently (in 2020).”
Gino has been placed on administrative leave at Harvard Business School, and Harvard has requested that three more papers be retracted. In a statement on LinkedIn, Gino said: “As I continue to evaluate these allegations and assess my options, I am limited into what I can say publicly. I want to assure you that I take them seriously and they will be addressed.”
I highly recommend the series of blog posts in which the report authors explain, paper by paper, how they detected the cheating. Some impressive work went into proving not just that the data must have been tampered with, but that the tampering was deliberate. The investigators used Microsoft Excel’s version control features to demonstrate that the initial versions of the data looked quite different and that someone went in and changed the numbers.
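To make the mechanics concrete, here is a minimal, hedged sketch of one spreadsheet-forensics idea of the same general flavor (not the investigators’ actual code): an .xlsx file is really a ZIP archive, and its xl/calcChain.xml part records the order in which Excel last calculated formula cells, so formula cells that sit out of step with their row positions can be a sign that rows were moved or re-sorted after the data was entered. The workbook name below is hypothetical.

```python
# Hedged sketch of one spreadsheet-forensics idea (not the
# investigators' actual code): an .xlsx file is a ZIP archive, and
# xl/calcChain.xml lists formula cells in the order Excel last
# calculated them. Cells appearing out of row order can indicate
# that rows were moved or re-sorted after the data was entered.
import re
import zipfile

def calc_chain_cells(path: str) -> list[str]:
    with zipfile.ZipFile(path) as zf:
        xml = zf.read("xl/calcChain.xml").decode("utf-8")
    # Entries look like <c r="B7" .../>; "r" is the cell reference.
    return re.findall(r'<c r="([A-Z]+[0-9]+)"', xml)

cells = calc_chain_cells("study_data.xlsx")  # hypothetical workbook
rows = [int(re.search(r"[0-9]+", c).group()) for c in cells]
out_of_order = [cell for cell, prev, cur
                in zip(cells[1:], rows, rows[1:]) if cur < prev]
print("formula cells out of row order:", out_of_order)
```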
Take that 2012 study I mentioned above. The third experiment, the insurance fraud one, had data that appeared fabricated. But when researchers looked more closely, so did the first and second experiments. Gino was entirely responsible for data collection for the first experiment and is the one suspected of having a hand in its fabrication. But she had nothing to do with the data collection for the third experiment.
This, of course, means that it looks like that single 2012 paper on dishonesty had two different people fabricate data in order to get a publishable result.
There’s a lot of discussion about the pressure to publish in academia and how it can lead to bad statistical practices aimed at fishing for a good p-value, or pumping up a result as much more impactful and important to the field than it really is.
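To see why fishing works, consider the arithmetic: at the conventional p < 0.05 threshold, running 20 independent tests on pure noise produces at least one “significant” result about 1 − 0.95^20 ≈ 64 percent of the time. A quick simulation (a sketch, not anyone’s actual analysis) bears this out:

```python
# Simulate "fishing for a p-value": 20 t-tests per study, all on
# pure noise, so every significant result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_tests, n_per_group = 2000, 20, 30

hits = 0
for _ in range(n_studies):
    pvals = [stats.ttest_ind(rng.normal(size=n_per_group),
                             rng.normal(size=n_per_group)).pvalue
             for _ in range(n_tests)]
    hits += min(pvals) < 0.05  # report only the best-looking test

print(f"studies with a 'finding': {hits / n_studies:.0%}")  # ~64%
```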
There’s less discussion of actual straight-up fraud, even though it’s disturbingly common and can have a huge impact on our understanding of a subject. Early in the Covid-19 pandemic, bad claims about treatments popped up thanks to fraudulent studies, and it then took a lot of good research to disprove them.
The problem is that our peer review process isn’t very well suited to looking for outright, purposeful fabrication. We can reduce many kinds of scientific malpractice by preregistering studies, being willing to publish null results, looking out for irresponsibly testing lots of hypotheses without appropriate statistical corrections, and so on. But that does nothing against someone who just switches data points from the control group to the experimental group in an Excel spreadsheet, which is what it appears Gino did.
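And that kind of tampering takes almost no skill. In this hedged toy simulation (illustrative only, not a reconstruction of any real dataset), both groups are drawn from the same distribution, so there is no true effect; reassigning the ten highest control observations to the experimental group typically manufactures a significant result anyway:

```python
# Toy illustration of tampering: control and treatment come from the
# SAME distribution; moving the 10 highest control values into the
# treatment group typically pushes p below 0.05 out of nothing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = np.sort(rng.normal(50, 10, size=100))
treatment = rng.normal(50, 10, size=100)

print(f"honest p:   {stats.ttest_ind(treatment, control).pvalue:.3f}")

moved, control_kept = control[-10:], control[:-10]  # top 10 reassigned
treatment_padded = np.concatenate([treatment, moved])
print(f"tampered p: {stats.ttest_ind(treatment_padded, control_kept).pvalue:.3f}")
```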
That’s not to say that frauds can’t be caught. One huge thing that I hear about from experts every single time I cover a scientific fraud case: Publishing the data is the way the fraud gets detected. It’s not that hard to manipulate some numbers, but it’s hard to do it without a trace. Many of the fraud cases highlighted by the team investigating Gino are downright clumsy.
Some journals now enforce an expectation that you publish your data when you publish your research. Some academics hesitate — it takes a lot of work to build a dataset, and they may want to write more papers using the same data without being scooped by other researchers — but I think the pros of requiring data publication strongly outweigh the cons. It’s bad for everyone when fraudulent science gets published. It’s an injustice to scientists who are really doing the work but can’t manufacture such clean and eye-popping results. And in cases like Covid-19, it resulted in research funding being badly directed and people taking medications that couldn’t help them.
Amazon and Walmart are a little bit evil and make us a little bit evil, too.
It’s Amazon’s world, and we’re just living in it. Or Walmart’s. Or really, actually, both.
Many Americans like to think of themselves as conscientious consumers — as the types of people who shop their values, support small businesses, and generally try to do the right thing when they buy. We also all live in reality, where people are busy, our funds are limited, and convenience is really nice. Many of us know that buying shampoo at the local pharmacy would be the better option, but it’s 20 minutes away, and what if, once we get there, it’s locked up? So we place an order on Amazon and move on. We’re well aware we could go to any number of stores for a new bath mat and holiday decorations and back-to-school gear, but we also know we get them all at Walmart for less. We appreciate that; we just don’t appreciate thinking about how little they pay their employees.
Amazon and Walmart are fixtures of commerce in the United States today, retail behemoths that have a stronghold on what consumers buy, online and off. We have played a key role in helping them get there, often neglecting to weigh the trade-offs we make when we resort to them when we shop.
The two companies are fierce competitors, and the competition between them has led to a race to the bottom on pricing, speed, and ruthlessness in an effort for one to come out on top. Their rivalry — and its implications — is the topic of the recently released Winner Sells All: Amazon, Walmart, and the Battle for Our Wallets by veteran business reporter (and my former Vox colleague) Jason Del Rey.
I recently spoke with Del Rey about the grip Amazon and Walmart have on the American economy, the trade-offs they (and, ultimately, we) make for them to run their businesses, and what, if anything, poses a threat to these companies. We also got into how Amazon has managed to take over from Walmart as the Bad Guy in retail — even though nobody’s really a hero here.
This conversation has been edited for length and clarity.
How much of a hold do Amazon and Walmart have on us, as consumers? Like, are there any legitimate competitors?
They have a huge hold on us. What Amazon has done over the years with Prime is a profound change that, by now, goes overlooked. It has made a lot of us feel that we can order one product at a time from this magic screen, and that having it arrive that day or the next is sustainable for businesses in this country and for the world. And that’s the way shopping is today. The effect of Amazon Prime on shopping habits can’t be overstated.
What Walmart’s done, and they’re not the only ones, is drill home the idea that what we should value above all else is the lowest possible cost for a good. There are plenty of reasons why tens of millions of people base their shopping on that: they have no choice. But my fear is that within this rivalry there’s a constant race to the bottom that might be good for each of us in the short term but poses a whole slew of problems in the long term.
It feels to me that part of the story is that these two companies have sort of out-terribled each other over the years to try to compete, constantly undercutting one another and competitors to be faster and cheaper. Do you think that’s a fair assessment?
Listen, I can understand how that can be the view, and there are many times in my reporting history on them that that’s how I felt. If I take a step back, I look at it with a little more nuance.
They have each had, at different times and to different levels, negative impacts on small businesses and working-class people in this country, and they deserve to continue to have scrutiny placed on them for that. It may very well take government intervention to quote-unquote “fix” some of those issues. Putting that aside, it’s hard for me to look at events like the pandemic and the role each of them played in different ways in helping large masses of people get by on a day-to-day basis and think it’s all bad.
Now, why were they in that position to be almost utilities in the first place? A lot of competition has gone away over the years. Is that government’s fault? Do we wish there were better people leading these companies that took a different path? There are a bunch of different answers.
They have played the role of doing what this country has rewarded big corporations for doing in the last few decades, and that’s showing growth at all costs. Everything else is an afterthought.
Is it possible to compete with Amazon and Walmart in the current landscape?
If you’re going head-on at them in industries or categories that are core to their business, it has been very difficult to compete with them at scale. You can carve off customers on the fringes, but if you’re talking about building a multibillion-dollar business doing something that they do well or they care about, it has been next to impossible. There are a few exceptions. I think about Chewy in the pet product category. I don’t know what Chewy’s market cap is today, but it’s a sizable business that has found success.
In decades past, if you went head-to-head, Amazon and Walmart were going to eventually crush you or buy you.
So really, really hard to try to compete, even with possible antitrust action from government regulators swirling around them?
In the future, I think a lot of it will depend on whether there is some type of government intervention. I’m mainly talking about Amazon here, because the days of Walmart being at risk of any antitrust scrutiny feel like they’re gone — much to the frustration of Amazon leadership.
They have both failed a decent amount when trying to enter a new space that has successful incumbents.
Like what?
Health care is one space they’ve both had challenges in. Amazon’s trying to buy their way in with their acquisition of One Medical [a primary care provider].
Amazon’s physical retail initiatives, including buying Whole Foods, have largely been, for them, huge disappointments. When I talk to former Amazonians, as they call themselves, they think of themselves as technologists. They just get bored as hell working on physical retail, and I think that showed.
So maybe you can compete with them in things that they’re bad at, or at something else in the future we don’t see yet.
You can look at the apparel and accessories space, companies like Shein and Temu; those companies are growing very quickly and have extremely low prices. I’m sure Amazon’s paying attention as they once did with Wish.com, but I’m skeptical about the long-term sustainability of those businesses and those models.
I hope there’s a company that doesn’t exist today or isn’t getting attention, and in 10, 20 years from now, we are talking about them in this space. Walmart and Amazon are just so entrenched, especially in their combined size in online retail. Even with the slowdown in the last year or two, it’s hard to see real threats to them — unless it’s a company coming from a different angle, maybe a Shopify in software or a TikTok.
Walmart was the original Big Bad in terms of taking out competitors, killing off local retailers, underpaying workers, etc. And then Amazon came along and their positions kind of shifted, at least reputationally. What’s changed?
In the early 2000s, Walmart leadership, including their CEO, essentially made nice with some of their biggest critics — on the environmental front, with critical journalists. It was self-serving, yes, but they went out and took the time to actually meet with and listen to critics, and I think that actually worked for them. Whether that was them really changing their business for moral or good reasons at the time, undertaking some environmental or green initiatives, or whether it was all about PR, it kind of worked.
Amazon has never accepted or had the self-awareness of “maybe these people who say bad things about us have a point.” Folks who have been at the company in recent years talk about this extreme lack of self-awareness that they’re not just a startup out to do good anymore and that when they make decisions, whether it’s labor decisions or partner decisions with the small businesses that sell on their marketplace, these tiny tweaks have massive impacts. All the way up to the top of the company, they have a hard time accepting criticism. They are largely thin-skinned and think they’re misunderstood. It definitely has an effect in Washington, DC, and it makes a lot of critics dig in their heels.
So Amazon’s just kind of bad at the PR part at this point?
Amazon is Walmart 2.0 or 3.0. Walmart, on the fringes, has made some changes to better satisfy some critics, but at their core, there’s a lot of similar, justified criticisms of them that Amazon mostly takes the brunt of.
One big difference, as smart Amazon critics lay out, is that Amazon is doing so much in so many different ways across the internet, essentially laying the pipes and then putting up the tollbooth, so its power feels much more dangerous and harder to break. I think that’s why, in DC and in antitrust circles, you see a focus on the infrastructure they control, from AWS [Amazon Web Services, their cloud computing service] to the advertising platform to all the other fees they charge their partners. It makes them seem like a different type of danger to competition.
Having Jeff Bezos as the world’s richest man for a long time only hurt them as well, which is ironic when you consider what the Walton family’s combined wealth is.
I wouldn’t understate or undersell how much Jeff Bezos putting himself into the media over the last three or four years has hurt the company in that it has brought more overall attention to him and his wealth.
I’m sure [Amazon CEO] Andy Jassy would love to sit down with you and make the case that Walmart should get a lot more scrutiny. But it does feel, despite their longstanding stance on unions, like they’ve escaped their darkest days of criticism. I wouldn’t say they’re beloved, but they’re not viewed as poorly as Amazon is in some circles.
Right. Walmart is no longer the boogeyman.
One more thing: There are a lot of people who have issues with Amazon, but I’m no longer surprised by how many people love the company, or love the service. I don’t know if your readers do; I think a lot of them won’t admit it, but they do.
Well, that’s always the thing. You see the media and certain critics saying everybody hates Big Tech, Amazon’s evil. And then you look at brand favorability, and Amazon is one of the most popular brands in the country. It’s a good service.
I think it’s gotten worse, but yeah.
How do you think about some of the trade-offs both of these companies make — and have consumers make — in order to get people stuff fast and at super-low prices? What does that mean for how they pay and treat their employees so consumers can have low-priced convenience?
There are some really shitty trade-offs. I used to think each of us is to blame for those trade-offs, and I kind of feel that way sometimes, but there are all these reasons why these companies were allowed to become as entrenched and as hard to compete with as they are. Is it the average person’s fault, day to day, doing what they think is the most convenient thing for their lives?
In our house, we are customers of Amazon, of Walmart, of Target and Trader Joe’s and the local coffee shops and restaurants. On a personal level, we try to think about whether we actually need that thing the next day or two days later. Sometimes, it feels like yes, and we’ll still place that order with that service. We’ve gotten better about taking other paths when we don’t. But these companies have convinced us that we need everything within one or two days.
Jet.com, an e-commerce company that briefly existed as an independent company [before being acquired by Walmart], had this idea that they would kick you back some savings if you waited a little longer. That would take some costs out of the logistics part of the business, and oh, by the way, maybe that’s a little better for employees and the environment, too. I wanted to believe all of that could work because it seemed the direction we were heading was a really bad one for everyone except the executives, but that model ended up not working for a variety of reasons.
I’m hopeful there’s a world where we can turn back the clock, but I think a lot of people are addicted to convenience and have convinced themselves it saves time that they use in better ways. Instead, we’re fucking scrolling TikTok.
We’re headed in a direction where there’s absolutely more automation going into these businesses. Walmart and Amazon will make the argument that workers are not going away, that they are making workers’ lives better and taking away the worst work. But there are also really negative consequences when you add automation to the work, such as quotas for workers being increased.
I’m really hoping there are entrepreneurs out there who can somehow convince us that we don’t need things the same day or the next day, or there’s a sustainable business model to really make a go at a new type of service. It may be just too late, which is dark and depressing.
Once these companies offer one-day shipping or two-day shipping or whatever, doesn’t everybody else have to in order to even try to compete or keep up?
The ripple effects of everyone else following them are real. For someone to choose a different path or a different way, it has to be a very, very young company that is sort of naive and has nothing to lose because they only have three employees. That seems, to me, to be just about it.
What you do with Amazon is actually not shopping; it’s buying. Largely, it’s a very transactional relationship that people have with it. Every so often, there’s an Amazon dress or Amazon whatever, but usually, you’re being sent to the site for some very specific reason. You’re transacting, you’re fulfilling some desire or need. But the idea that you’re browsing or enjoying yourself or window shopping or getting some kind of entertainment value is something they’ve been terrible at. And they’ve tried for years. They’re best at selling stuff that lends itself to transactions and not shopping.
Is there a competitor that upends them by taking the shopping approach? I don’t know. The stuff we buy most frequently, like groceries or consumer goods or clothing basics, is stuff you don’t need to browse much to feel good about your buying decision.
I’ve never thought about it, but it does feel like every time it’s the holiday season I wind up on Amazon looking for stuff to buy and within two minutes am like okay, this is pointless, I can’t do anything, and log off.
The site is not a fun one to move around on, and it doesn’t look all that different from 15 years ago, but they still feel so entrenched that it’s reflex or comfort just to turn to them.
If we think that Amazon and Walmart do have too much control, what’s the biggest threat to them? Unions? Government regulation? Is there a threat at all?
On the Amazon side, they are at this inflection point with layoffs and cost-cutting and a pullback in some areas where they were heavily investing, such as Alexa. Morale is not good there. It feels, in some corners, like a boring, mature company. One big risk to them is bigness and moving slower than they have in their early years, when really their biggest advantage was speed. They were able to roll stuff out and test it for a variety of reasons. Wall Street gave them a long leash, so they didn’t have to worry about profitability. Their bigness is a threat.
On unionization, I’m more skeptical today than I was a year ago that there’s a real, sustainable drive that’s going to make a difference in working conditions there. Maybe if the Teamsters make up more ground than they have to date, they could make a difference, but I’m skeptical of that.
I, like a lot of people, am waiting for this long-rumored FTC antitrust lawsuit to drop [separate from the Prime cancellation lawsuit that the Federal Trade Commission recently filed], and that could have an impact. But it also could be a five- to seven-year process. How much ground they gain in five to seven years as a technology company of their size, I don’t know; it in some ways seems like a hopeless fight. I look at Amazon and think the biggest threat to themselves today really is themselves.
With Walmart, it has taken so much effort and time for them to get to a place where their online services are not awful. They are more committed than they have ever been to try to meet shoppers where they want to be met, when they want to be met, with the products they want to buy, but man, it’s taken a long time and a lot of money. The CEO told me it’s gotten better, but he’s still dissatisfied.
There was a time when Amazon was a real existential threat to Walmart because of how much Walmart was ignoring them, but now it just feels like perhaps they’ll stay in this position of a much smaller number two in e-commerce and kind of be content with that.
Walmart stores, especially as they try to convert them, finally, into mini warehouses and robot headquarters, are not going away. That’s still where so much of their business is done that they’re going to do okay for themselves for a while.
So, for now, we’re just stuck with Amazon and Walmart and being a little complicit in the meantime?
These are complicated companies with complicated impacts. I’d love to say there’s a world where someone upends them because they are big and bad and do a lot of harmful things to partners and employees, but I’ve accepted that they’re not going anywhere and we’ve got to make do with them and then hope on the fringes competitors can gnaw away. Maybe, if they are in fact breaking the law, they can be forced to change the way they do business, but I’m not holding out much hope on any of that.
We live in a world that’s constantly trying to sucker us and trick us, where we’re always surrounded by scams big and small. It can feel impossible to navigate. Every two weeks, join Emily Stewart to look at all the little ways our economic systems control and manipulate the average person. Welcome to The Big Squeeze.
What the history of nuclear arms can — and can’t — tell us about the future of AI.
If you spend enough time reading about artificial intelligence, you’re bound to encounter one specific analogy: nuclear weapons. Like nukes, the argument goes, AI is a cutting-edge technology that emerged with unnerving rapidity and comes with serious, difficult-to-predict risks that society is ill-equipped to handle.
The heads of the AI labs OpenAI, Anthropic, and Google DeepMind, as well as researchers like Geoffrey Hinton and Yoshua Bengio and prominent figures like Bill Gates, signed an open letter in May making the analogy explicit, stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Oppenheimer director Christopher Nolan, by contrast, doesn’t think AI and nukes are very similar. The Making of the Atomic Bomb author Richard Rhodes thinks there are important parallels. The New York Times ran a quiz asking people if they could distinguish quotes about nuclear weapons from quotes about AI. Some policy experts are calling for a Manhattan Project for AI, just to make the analogy super-concrete. Anecdotally, I know tons of people working on AI policy who’ve been reading Rhodes’s book for inspiration. I recently saw a copy on a coffee table at Anthropic’s offices, when I was visiting there for a reporting trip.
It’s easy to understand why people grasp for analogies like this. AI is a new, bewildering technology that many experts believe is extremely dangerous, and we want conceptual tools to help us wrap our heads around it and think about its consequences. But the analogy is crude at best, and there are important differences between the technologies that will prove vital in thinking about how to regulate AI to ensure it’s deployed safely, without bias against marginalized groups and with protections against misuse by bad actors.
Here’s an incomplete list of ways in which the two technologies seem similar — and different.
In December 1938, the chemists Otto Hahn and Fritz Strassmann found that if they bombarded the radioactive element uranium with neutrons, they got what looked like barium, an element much smaller than uranium. It was a baffling observation — radioactive elements had to that point only been known to emit small particles and transmute to slightly smaller elements — but by Christmas Eve, their collaborators, the physicists Lise Meitner and Otto Frisch, had come up with an explanation: the neutrons had split the uranium atoms, creating solid barium and krypton gas. Frisch called the process “fission.”
On July 16, 1945, after billions of dollars of investment and the equivalent of 67 million hours of labor from workers and scientists including Frisch, the US military detonated the Trinity device, the first nuclear weapon ever detonated, using the process that Frisch and Meitner had theorized less than seven years earlier.
Few scientific fields have seen a theoretical discovery translated into an immensely important practical technology quite that quickly. But AI might come close. Artificial intelligence as a field was born in the 1950s, but modern “deep learning” techniques in AI, which process data through several layers of “neurons” to form artificial “neural networks,” only took off with the realization around 2009 that specialized chips called graphics processing units (GPUs) could train such networks much more efficiently than the standard central processing units (CPUs) in computers. Soon thereafter, deep learning models began winning tournaments testing their ability to categorize images. The same techniques proved able to beat world champions at Go and StarCraft and to build models like GPT-4 or Stable Diffusion that generate incredibly compelling text and images.
Progress in deep learning appears to be roughly exponential, because the computing resources and data applied to it have been growing at a steady clip. The field of model scaling estimates what happens to AI models as the data, computing power, and number of parameters available to them are expanded. A team at the Chinese tech giant Baidu demonstrated this in an empirical paper in 2017, finding that “loss” (the measured error of a model, compared to known true results, on various tasks) falls predictably, following a power law, as models and their training data grow, and subsequent research from OpenAI and DeepMind has reached similar findings.
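For a sense of what those scaling laws look like, here is a minimal sketch using the approximate loss formula and coefficients DeepMind published for its Chinchilla models (Hoffmann et al., 2022); the exact constants depend on the dataset, tokenizer, and architecture, so treat the numbers as illustrative:

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022): predicted
# loss as a function of parameter count N and training tokens D.
# Coefficients are the approximate published fit; they vary by
# dataset, tokenizer, and architecture.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E = 1.69                 # irreducible loss of the data itself
    A, alpha = 406.4, 0.34   # penalty for too few parameters
    B, beta = 410.7, 0.28    # penalty for too little data
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling model and data together lowers loss smoothly and
# predictably -- a power law, with no sudden cliff or plateau.
for scale in (1, 4, 16, 64):
    n, d = 70e9 * scale, 1.4e12 * scale  # Chinchilla-sized baseline
    print(f"{scale:>3}x: predicted loss = {predicted_loss(n, d):.3f}")
```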
All of which is to say: much as nuclear fission developed astonishingly quickly, advanced deep learning models and their capabilities appear to be improving at a similarly startling pace.
I presume I do not need to explain how nuclear weapons, let alone the thermonuclear weapons that make up modern arsenals, can cause mass harm on a scale we’ve never before experienced. The same potential for AI requires somewhat more exposition.
Many scholars have demonstrated that existing machine learning systems adopted for purposes like flagging parents for Child Protective Services often recapitulate biases from their training data. As these models grow and are adopted for more and more purposes, and as we grow increasingly dependent on them, these kinds of biases will prove more and more consequential.
There is also substantial misuse potential for sufficiently complex AI systems. In an April paper, researchers at Carnegie Mellon were able to stitch together large language models into a system that, when instructed to make chlorine gas, could figure out the right chemical compound and instruct a “cloud laboratory” (an online service where chemists can conduct real, physical chemistry experiments remotely) to synthesize it. It appeared capable of synthesizing VX or sarin gas (as well as methamphetamine) and only declined due to built-in safety controls that model developers could easily disable. Similar techniques could be used to develop bioweapons.
Much of the information needed to make chemical or biological weapons is available publicly now, and has been for some time — but it requires specialists to understand and act on that information. The difference between a world where laypeople with access to a large language model can build a dangerous bioweapon, and a world where only specialists can, is somewhat akin to the difference between a country like the US, where large-capacity semiautomatic guns are widely available, and a country like the UK, where access to such weapons is strictly controlled. The vastly increased access to these guns has left the US with far higher rates of gun crime. LLMs could, without sufficient controls, lead to a world where the lone wolves who currently kill through mass shootings in the US instead use bioweapons with the potential to kill thousands or even millions.
Is that as bad as nuclear weapons? Probably not. For that level of harm you need AI takeover scenarios which are necessarily much more speculative and harder to reason about, as they require AIs vastly more powerful than anything that exists today. But the harms from things like algorithmic bias and bioweapons are more immediate, more concrete, and still large enough to demand a lot of attention.
I do not use nuclear weapons in my everyday life, and unless you’re in a very specific job in one of a handful of militaries, you probably don’t either. Nuclear fission has affected our everyday lives through nuclear energy, which provides some 4 percent of the world’s energy, but due to its limited adoption, that technology hasn’t exactly transformed our lives either.
We don’t know with any specificity how AI will affect the world, and anyone who tells you what’s about to happen in much detail and with a great deal of confidence is probably grifting you. But we have reason to think that AI will be a general-purpose technology: something like electricity or telegraphy or the internet that broadly changes the way businesses across sectors and nations operate, as opposed to an innovation that makes a dent in one specific sector (as nuclear fission did in the energy sector and in military and geopolitical strategy).
Producing text quickly, as large language models do, is a pretty widely useful service for everything from marketing to technical writing to internal memo composition to lawyering (assuming you know the tech’s limits) to, unfortunately, disinformation and propaganda. Using AI to improve services like Siri and Alexa so they function more like a personal assistant, and can intelligently plan your schedule and respond to emails, would help in many jobs. McKinsey recently projected that generative AI’s impact on productivity could eventually add as much as $4.4 trillion to the global economy — more than the annual GDP of the UK. Again, take these estimates with a large grain of salt, but the point that the technology will be broadly important to a range of jobs and sectors is sound.
Banning nuclear fission would probably be a bad idea — nuclear power is a very useful technology — but humans have other sources of energy. Banning advanced AI, by contrast, is clearly not viable, given how broadly useful it could be even with the major threats it poses.
When the theoretical physicist Niels Bohr first theorized in 1939 that uranium fission was due to one specific isotope of the element (uranium-235), he thought this meant that a nuclear weapon would be wholly impractical. U235 is much rarer than the dominant uranium-238 isotope, and separating the two was, and remains, an incredibly costly endeavor.
Separating enough U235 for a bomb, Bohr said at the time, “can never be done unless you turn the United States into one huge factory.” A few years later, after visiting Los Alamos and witnessing the scale of the industrial effort required to make working bombs, an effort that at its peak employed 130,000 workers, he quipped to fellow physicist Ed Teller, “You see, I told you it couldn’t be done without turning the whole country into a factory. You have done just that.”
Separating out uranium in Oak Ridge, Tennessee, was indeed a massive undertaking, as was the parallel effort in Hanford, Washington, to produce plutonium (the Hiroshima bomb used the former, the Trinity and Nagasaki bombs the latter). That gave arms control efforts something tangible to grasp onto. You could not make nuclear weapons without producing large quantities of plutonium or enriched uranium, and it’s pretty hard to hide that you’re producing large quantities of those materials.
A useful analogy can be made between efforts to control access to uranium and efforts to control access to the optimized computer chips necessary to do modern deep learning. While AI research involves many intangible factors that are difficult to quantify — the workforce skill needed to build models, the capabilities of the models themselves — the actual chips used to train models are trackable. They are built in a handful of fabrication plants (“fabs”). Government agencies can monitor when labs are purchasing tens or hundreds of thousands of these chips, and could even mandate firmware on the chips that logs certain AI training activity.
That’s led some analysts to suggest that an arms control framework for AI could look like that for nuclear weapons — with chips taking the place of uranium and plutonium. This might be more difficult for various reasons, from the huge amount of international cooperation required (including between China and Taiwan) to the libertarian culture of Silicon Valley pushing against imprinting tracking info on every chip. But it’s a useful parallel nonetheless.
As early as 1944, Niels Bohr was holding meetings with Franklin Roosevelt and Winston Churchill and urging them in the strongest terms to tell Joseph Stalin about the atomic bomb project. If he found out through espionage, Bohr argued, the result would be distrust between the Allied powers after World War II concluded, potentially resulting in an arms race between the US/UK and the Soviet Union and a period of grave geopolitical danger as rival camps accumulated mass nuclear arsenals. Churchill thought this was absurd and signed a pledge with Roosevelt not to tell Stalin.
The postwar arms race between the US and the Soviet Union proceeded much as Bohr predicted, with Churchill’s nation as an afterthought.
The historical context behind AI’s development now is much less fraught; the US is not currently in an alliance of convenience with a regime it despises and expects to enter geopolitical competition with as soon as a massive world war concludes.
But the arms race dynamics that Bohr prophesied are already emerging in relation to AI and US-Chinese relations. Tech figures, particularly ex-Google CEO Eric Schmidt, have been invoking the need for the US to take the lead on AI development lest China pull ahead. National security adviser Jake Sullivan said in a speech last year that the US must maintain “as large of a lead as possible” in AI.
As my colleague Sigal Samuel has written, this belief might rest on misconceptions that being “first” on AI matters more than how one uses the technology, or that China will leave its AI sector unregulated, when it’s already imposing regulations. Arms races, though, can be self-fulfilling: if enough actors on each side think they’re in an arms race, eventually they’re in an arms race.
The vast majority of nations have declined to develop nukes, including many wealthy nations that easily have the resources to build them. This limited proliferation is partly due to the fact that building nuclear weapons is fundamentally hard and expensive.
The International Campaign to Abolish Nuclear Weapons estimates that ultra-poor North Korea spent $589 million on its nuclear program in 2022 alone, implying it has spent many billions over the decades the program has developed. Most countries do not want to invest those kinds of resources to develop a weapon they will likely never use. Most terrorist groups lack the resources to build such a weapon.
AI is difficult and costly to train — but relative to nukes, much easier to piggyback off of and copy once some company or government has built a model. Take Vicuna, a recent language model built off of the LLaMA model released by Meta (Facebook’s parent company), whose internal details were leaked to the public and are widely available. Vicuna was trained using about 70,000 conversations that real users had with ChatGPT, which, when used to “fine-tune” LLaMA, produced a much more accurate and useful model. According to its creators, training Vicuna cost $300, and they argue its output rivals that of ChatGPT and its underlying models (GPT-3.5 and GPT-4).
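For a sense of how low that barrier is, here is a hedged sketch of the general recipe: parameter-efficient (LoRA) fine-tuning of a LLaMA-style base model on chat transcripts with the Hugging Face stack. This is not the Vicuna team’s actual training code, and the model ID and dataset file below are placeholders.

```python
# Hedged sketch of the general recipe (not Vicuna's actual code):
# LoRA fine-tuning of a LLaMA-style base model on chat transcripts.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "huggyllama/llama-7b"   # placeholder model ID
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # LLaMA ships without a pad token

model = AutoModelForCausalLM.from_pretrained(base)
# Train small low-rank adapters instead of all 7B weights -- this is
# what makes hobbyist-budget fine-tuning feasible.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Placeholder dataset: one chat transcript per line in a "text" field.
data = load_dataset("json", data_files="chat_transcripts.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="vicuna-ish", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```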
There are lots of nuances here that I’m glossing over. But the capability gap between hobbyist and mega-corporation is simply much smaller in AI than it is in nukes. A team of hobbyists trying to develop a nuclear weapon would have a much easier job than the Manhattan Project did, because they can benefit from everything the latter, and every nuclear project since, has learned. But they still could not build a working nuclear device. People with minimal resources can build and customize advanced AI systems, even if not cutting-edge ones, and will likely continue to be able to do so.
One expert I spoke to when thinking about this piece said bluntly that “analogies are the worst form of reasoning.” He has a point: one of my own takeaways from considering this particular analogy is that it’s tempting in part because it gives you a lot more historical material to work with. We know a lot about how nuclear weapons were developed and deployed. We know very little about how the future development and regulation of AI is likely to proceed. So it’s easier to drone on about nukes than it is to try to think through future AI dynamics, because I have more history to draw upon.
Given that, my main takeaway is that glib “AI=nukes” analogies are probably a waste … but more granular comparisons of particular processes, like the arms race dynamics between the US and Soviets in the 1940s and the US and China today, can possibly be fruitful. And those comparisons point in a similar direction. The best way to handle a new, powerful, dangerous technology is through broad international cooperation. The right approach isn’t to lie back and just let scientists and engineers transform our world without outside input.
France’s highest administrative court says the soccer federation can ban headscarves in matches - The Council of State issued its ruling after a collective of headscarf-wearing soccer players called “Les Hijabeuses” campaigned against the ban.
ARCHERY | Abhishek Verma’s scintillating form a good omen ahead of big ticket events - The experienced compound archer, armed with a positive mindset and a new bow set, secured his third individual gold medal recently in Colombia
Isn’t She Beautiful, Gimmler, Adbhut and Fast Rain please -
Indian hockey hires mental trainer Paddy Upton for Men’s Asian Champions Trophy - Upton was part of the staff en route to Indian cricket team’s 2011 World Cup triumph
Rajasthan Royals set to offer Jos Buttler lucrative multi-year contract - “It is understood that the offer to Buttler is yet to be formally tabled, and it’s unclear whether the T20 World Cup winning captain intends to accept the deal,” a British newspaper reported
One killed as ‘rioters’ open unprovoked firing in Manipur’s Kangpokpi - While the local army unit tweeted that “unconfirmed reports” indicated some casualties in the incident, official sources said one body had been recovered from the area and a few others could be seen lying on the ground
Here are the big stories from Karnataka today - Welcome to the Karnataka Today newsletter, your guide from The Hindu on the major news stories to follow today. Curated and written by Nalme Nachiyar.
Religious fervour, charity mark Bakrid in Telangana -
Kerala scales up Gulf campaign to promote monsoon tourism - As a prelude, Kerala Tourism showcased a wide range of its products and themes in Dubai last month during the 30th edition of Arabian Travel Market
Rosemala to welcome visitors from July 1 -
What we know about killing of Paris teen… in 55 seconds - The BBC’s Hugh Schofield investigates at the scene where police shot dead 17-year-old Nahel.
France shooting: Who was Nahel M, shot by police in Nanterre? - He was learning to be an electrician and played rugby league but died at a police check near Paris.
Paris shooting: Why French government backed family so fast - President Macron angered French police unions by criticising the shooting of a 17-year-old boy.
‘Mum was sent to Ireland, shamed and had to give me up’ - A woman sent from Britain to Ireland as a baby speaks about repatriations of unmarried Irish mothers.
Ukraine war: Countdown has begun to end of Putin, say Kyiv officials - Senior officials suggest the Russian leader cannot survive a catastrophic loss of authority.
Speed matters: How Ethernet went from 3Mbps to 100Gbps… and beyond - One of the biggest computing inventions of all time, courtesy of Xerox PARC. - link
US public wants climate change dealt with, but doesn’t like the options - People want both action and to keep using fossil fuels. - link
Brave aims to curb practice of websites that port scan visitors - Brave will allow users to choose which sites can access local network resources. - link
NANOGrav hears “hum” of gravitational wave background, louder than expected - Exotic stars called millisecond pulsars serve as celestial metronomes. - link
Medical waste company sues health system over hidden human torso - The suit also alleges deceit, staged photos, and hidden hazardous waste. - link
Forging A Return To Productive Conversation: An Open Letter to Reddit -
To All Whom It May Concern,
For fifteen years, /r/Jokes has been one of Reddit’s most-popular communities. That time hasn’t been without its difficulties, but for the most part, we’ve all gotten along (with each other and with administrators). Members of our team fondly remember Moderator Roadshows, visits to Reddit’s headquarters, Reddit Secret Santa, April Fools’ Day events, regional meetups, and many more uplifting moments. We’ve watched this platform grow by leaps and bounds, and although we haven’t been completely happy about every change that we’ve witnessed, we’ve always done our best to work with Reddit at finding ways to adapt, compromise, and move forward.
This process has occasionally been preceded by some exceptionally public debate, however.
On June 12th, 2023, /r/Jokes joined thousands of other subreddits in protesting the planned changes to Reddit’s API; changes which – despite being immediately evident to only a minority of Redditors – threatened to worsen the site for everyone. By June 16th, 2023, that demonstration had evolved to represent a wider (and growing) array of concerns, many of which arose in response to Reddit’s statements to journalists. Today (June 26th, 2023), we are hopeful that users and administrators alike can make a return to the productive dialogue that has served us in the past.
We acknowledge that Reddit has placed itself in a situation that makes adjusting its current API roadmap impossible.
However, we have the following requests:
Reddit is unique amongst social-media sites in that its lifeblood – its multitude of moderators and contributors – consists entirely of volunteers. We populate and curate the platform’s many communities, thereby providing a welcoming and engaging environment for all of its visitors. We receive little in the way of thanks for these efforts, but we frequently endure abuse, threats, attacks, and exposure to truly reprehensible media. Historically, we have trusted that Reddit’s administrators have the best interests of the platform and its users (be they moderators, contributors, participants, or lurkers) at heart; that while Reddit may be a for-profit company, it nonetheless recognizes and appreciates the value that Redditors provide.
That trust has been all but entirely eroded… but we hope that together, we can begin to rebuild it.
In simplest terms, Reddit, we implore you: Remember the human.
We look forward to your response by Thursday, June 29th, 2023.
There’s also just one other thing.
submitted by /u/JokeSentinel
A man is sitting on a flight from NYC to London -
He feels a little cold, so he asks the cabin attendant for a blanket. The cabin crew completely ignores him. On the seat next to him is none other than a parrot. The parrot screams “get me a scotch on the rocks you stupid cunt”. Not a moment passes and the parrot gets a nice glass of whiskey. The man asks for a blanket again only to be ignored. “Hey, old cow” yells the parrot “where’s my snacks?” Peanuts, cashews and salted almonds find themselves immediately on the parrot’s tray. The man gives up “I’m freezing you stupid bitch. What the hell do I need to do to get a fuckin’ blanket on this shit of a flight?!” The flight attendant says something into a comm system and a big man comes, opens the door at 37,000ft and throws both the man and the parrot out of the plane. On the way down, the parrot takes a good look at the man and says: “you know something? You’re pretty brave for someone with no wings”
submitted by /u/kfiri
A man goes down to a ranch to look at a horse -
The rancher brings out a beautiful mare.
“Can I see her teeth?” The man asks nicely.
“Sure thing!” Says the rancher and opens her lips to show off her perfect teeth.
“Beautiful! Can I see her tail and hooves?” The man asks.
“By all means, partner!” Replied the rancher and turns her around to show her expertly manicured back left hoof and braided tail.
“Lovely!” The man exclaimed “Now, can I see her twat?”
“WHAT?!” Asked the rancher sharply.
“Her twat, sir.” The man said again “Can I see her twat?”
The rancher gets furious, grabs the man by the neck, lifts the horse’s tail, shoves the man’s face into the mare’s rear and shouts “Get a good look, pervert!”
“I don’t know why you did that!” huffed the man, exasperated. “All I weawy wanted was to see her wun!”
submitted by /u/headexpl0dy
Women say they like a man who is “funny” and “spontaneous” -
But you knock on their bedroom window at midnight wearing a clown costume and suddenly it’s all screaming and throwing things and police sirens.
submitted by /u/Gil-Gandel
What’s one thing you shouldn’t say at your boss’s funeral? -
Who’s thinking outside the box now, Kyle?
submitted by /u/Dirt_Empty