Daily-Dose

Contents

From New Yorker

From Vox

One argument in response to that disparity is that Twitter shouldn’t be a free, ad-supported business — that it should be something that people pay to use. But it’s easy to imagine that if Twitter cost money to use, most of Twitter’s users would decide that they’d rather spend their money on just about anything else. Which would mean the remaining, paying users would be talking to an even smaller audience — which would defeat the appeal that Twitter had for them in the first place.

But Twitter isn’t the world’s worst business. It’s just not a great one. Last year, it more or less turned that $5 billion in revenue into about $273 million in profit — a 5 percent margin.

That’s more profitable than, say, your average grocery store, but it’s nothing like what public investors expect from a super-powerful, world-shaking Silicon Valley tech platform. A private owner who isn’t consumed with turning Twitter into a profit center, though, might be totally happy with that.

Whether Twitter employees — and in particular, its in-demand engineers — would be happy at a company that doesn’t offer the prospect of getting rich from stock options and grants would be another question. We will have plenty more in the coming days.

But, yeah: Sometimes billionaires buy things because they want to make money from them, and sometimes billionaires buy yachts, which won’t make them any money. And if you’re the world’s richest man, Twitter can be your $43 billion yacht.

Meanwhile, here’s the AI’s output when you ask for a flight attendant:

[Image: DALL-E 2’s output for “a flight attendant.” Courtesy of OpenAI]

OpenAI is well aware that DALL-E 2 generates results exhibiting gender and racial bias. In fact, the examples above are from the company’s own “Risks and Limitations” document, which you’ll find if you scroll to the bottom of the main DALL-E 2 webpage.

OpenAI researchers made some attempts to resolve bias and fairness problems. But they couldn’t really root out these problems, because each possible fix comes with its own trade-offs.

For example, the researchers wanted to filter out sexual content from the training data because that could lead to disproportionate harm to women. But they found that when they tried to filter that out, DALL-E 2 generated fewer images of women in general. That’s no good, because it leads to another kind of harm to women: erasure.
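To make the mechanics of that trade-off concrete, here’s a minimal, purely hypothetical sketch — it is not OpenAI’s pipeline or data, just invented numbers. The point it illustrates: if the examples a content filter removes disproportionately depict women, the share of women in the remaining training data drops.

```python
# Hypothetical illustration of the filtering trade-off described above.
# The corpus, flags, and proportions are made up; labels would in practice
# come from some content/attribute classifier (assumed here).

from dataclasses import dataclass

@dataclass
class TrainingExample:
    caption: str
    flagged_sexual: bool   # assumed output of a content classifier
    depicts_woman: bool    # assumed output of an attribute classifier

def share_depicting_women(examples):
    """Fraction of examples whose image depicts a woman."""
    if not examples:
        return 0.0
    return sum(e.depicts_woman for e in examples) / len(examples)

# Toy corpus in which flagged content disproportionately involves women.
corpus = (
    [TrainingExample("...", flagged_sexual=True,  depicts_woman=True)]  * 30 +
    [TrainingExample("...", flagged_sexual=False, depicts_woman=True)]  * 20 +
    [TrainingExample("...", flagged_sexual=False, depicts_woman=False)] * 50
)

filtered = [e for e in corpus if not e.flagged_sexual]

print(f"women before filter: {share_depicting_women(corpus):.0%}")    # 50%
print(f"women after filter:  {share_depicting_women(filtered):.0%}")  # ~29%
```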

OpenAI is far from the only artificial intelligence company dealing with bias problems and trade-offs. It’s a challenge for the entire AI community.

“Bias is a huge industry-wide problem that no one has a great, foolproof answer to,” Miles Brundage, the head of policy research at OpenAI, told me. “So a lot of the work right now is just being transparent and upfront with users about the remaining limitations.”

Why release a biased AI model?

In February, before DALL-E 2 was released, OpenAI invited 23 external researchers to “red team” it — engineering-speak for trying to find as many flaws and vulnerabilities in it as possible, so the system could be improved. One of the main suggestions the red team made was to limit the initial release to only trusted users.

To its credit, OpenAI adopted this suggestion. For now, only about 400 people (a mix of OpenAI’s employees and board members, plus hand-picked academics and creatives) get to use DALL-E 2, and only for non-commercial purposes.

That’s a change from how OpenAI chose to deploy GPT-3, a text generator hailed for its potential to enhance our creativity. Given a phrase or two written by a human, it can add on more phrases that sound uncannily human-like. But it’s shown bias against certain groups, like Muslims, whom it disproportionately associates with violence and terrorism. OpenAI knew about the bias problems but released the model anyway to a limited group of vetted developers and companies, who could use GPT-3 for commercial purposes.

Last year, I asked Sandhini Agarwal, a researcher on OpenAI’s policy team, whether it makes sense that GPT-3 was being probed for bias by scholars even as it was released to some commercial actors. She said that going forward, “That’s a good thing for us to think about. You’re right that, so far, our strategy has been to have it happen in parallel. And maybe that should change for future models.”

The fact that the deployment approach has changed for DALL-E 2 seems like a positive step. Yet, as DALL-E 2’s “Risks and Limitations” document acknowledges, “even if the Preview itself is not directly harmful, its demonstration of the potential of this technology could motivate various actors to increase their investment in related technologies and tactics.”

And you’ve got to wonder: Is that acceleration a good thing, at this stage? Do we really want to be building and launching these models now, knowing that doing so can spur others to release their own versions even faster?

Some experts argue that since we know there are problems with the models and we don’t know how to solve them, we should give AI ethics research time to catch up to the advances and address some of the problems, before continuing to build and release new tech.

Helen Ngo, an affiliated researcher with the Stanford Institute for Human-Centered AI, says one thing we desperately need is standard metrics for bias. A bit of work has been done on measuring, say, how likely certain attributes are to be associated with certain groups. “But it’s super understudied,” Ngo said. “We haven’t really put together industry standards or norms yet on how to go about measuring these issues” — never mind solving them.
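As a rough illustration of the kind of measurement Ngo is describing, here’s a small hypothetical sketch in Python. The prompts, labels, and counts are invented; in practice the labels would come from human raters or a classifier, and the unresolved question is exactly the one she raises — which reference point and thresholds the field should standardize on.

```python
# Sketch of one simple association metric: how often a generated attribute
# co-occurs with a given prompt, compared across prompts. All data invented.

from collections import Counter

def association_rate(labels, attribute):
    """Share of generated samples carrying the given attribute label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return counts[attribute] / total if total else 0.0

# Hypothetical labels for 10 images generated per prompt.
generations = {
    "a CEO":              ["man"] * 9 + ["woman"] * 1,
    "a flight attendant": ["woman"] * 9 + ["man"] * 1,
}

for prompt, labels in generations.items():
    rate = association_rate(labels, "man")
    print(f'{prompt!r}: {rate:.0%} of samples labeled "man"')

# A disparity score could then be the gap between the observed rate and some
# reference point (say, 50%, or real-world occupational statistics). Choosing
# that reference is precisely the kind of norm the industry has not settled.
```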

OpenAI’s Brundage told me that letting a limited group of users play around with an AI model allows researchers to learn more about the issues that would crop up in the real world. “There’s a lot you can’t predict, so it’s valuable to get in contact with reality,” he said.

That’s true enough, but since we already know about many of the problems that repeatedly arise in AI, it’s not clear that this is a strong enough justification for launching the model now, even in a limited way.

The problem of misaligned incentives in the AI industry

Brundage also noted another motivation at OpenAI: competition. “Some of the researchers internally were excited to get this out in the world because they were seeing that others were catching up,” he said.

That spirit of competition is a natural impulse for anyone involved in creating transformative tech. It’s also to be expected in any organization that aims to make a profit. Being first out of the gate is rewarded, and those who finish second are rarely remembered in Silicon Valley.

As the team at Anthropic, an AI safety and research company, put it in a recent paper, “The economic incentives to build such models, and the prestige incentives to announce them, are quite strong.”

But it’s easy to see how these incentives may be misaligned for producing AI that truly benefits all of humanity. Rather than assuming that other actors will inevitably create and deploy these models, so there’s no point in holding off, we should ask the question: How can we actually change the underlying incentive structure that drives all actors?

The Anthropic team offers several ideas. One of their observations is that over the past few years, a lot of the splashiest AI research has been migrating from academia to industry. To run large-scale AI experiments these days, you need a ton of computing power — more than 300,000 times what you needed a decade ago — as well as top technical talent. That’s both expensive and scarce, and the resulting cost is often prohibitive in an academic setting.

So one solution would be to give more resources to academic researchers; since they don’t face the same pressure industry researchers do to deploy their models commercially as quickly as possible, they can serve as a counterweight. Specifically, countries could develop national research clouds to give academics access to free, or at least cheap, computing power; Compute Canada, which coordinates access to powerful computing resources for Canadian researchers, is an existing example.

The Anthropic team also recommends exploring regulation that would change the incentives. “To do this,” they write, “there will be a combination of soft regulation (e.g., the creation of voluntary best practices by industry, academia, civil society, and government), and hard regulation (e.g., transferring these best practices into standards and legislation).”

Although some good new norms have been adopted voluntarily within the AI community in recent years — like publishing “model cards,” which document a model’s risks, as OpenAI did for DALL-E 2 — the community hasn’t yet created repeatable standards that make it clear how developers should measure and mitigate those risks.
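For readers who haven’t seen one, here’s a purely illustrative sketch of the kind of fields a model card records, written as a Python structure. It is not OpenAI’s actual DALL-E 2 card, and the model name and entries are invented; the point is that the “evaluations” section is exactly where shared standards are still missing.

```python
# Illustrative-only model card structure, in the spirit of the model-card
# idea (Mitchell et al., 2019). Not any real company's documentation.

model_card = {
    "model": "example-image-generator",  # hypothetical model name
    "intended_use": "non-commercial research preview",
    "out_of_scope_uses": ["harassment", "disinformation", "explicit content"],
    "known_limitations": [
        "under-representation of women for occupational prompts",
        "Western-centric depictions of everyday scenes",
    ],
    "evaluations": [
        # Which metrics belong here, and what thresholds count as acceptable,
        # is the part the field has not yet standardized.
        {"metric": "occupation-gender association rate",
         "status": "reported, no agreed threshold"},
    ],
    "mitigations": ["training-data filtering", "prompt-level content policy"],
}
```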

“This lack of standards makes it both more challenging to deploy systems, as developers may need to determine their own policies for deployment, and it also makes deployments inherently risky, as there’s less shared knowledge about what ‘safe’ deployments look like,” the Anthropic team writes. “We are, in a sense, building the plane as it is taking off.”

  1. In Fisher v. University of Texas at Austin (2016), the Court echoed the idea that the desire to increase “‘student body diversity’ … is, in substantial measure, an academic judgment to which some, but not complete, judicial deference is proper.” While racial quotas and the like are forbidden, schools have some leeway to set admissions standards that foster diversity.

    Fisher also held that race-neutral methods of promoting diversity are preferred to race-conscious methods. Indeed, if a school wishes to use race-conscious admissions standards, it must first prove that a race-neutral method “would not promote its interest in the educational benefits of diversity ‘about as well and at tolerable administrative expense.’”

    Under current law, in other words, public schools and universities have a legitimate interest in fostering racial diversity, and they may intentionally design their admissions standards to increase the likelihood that students from underrepresented racial groups are admitted. Schools with race-conscious admissions programs may struggle to justify those programs in court, but the Supreme Court has historically treated race-neutral programs intended to enhance diversity as benign.

    But there’s no guarantee that the Court will continue to view such race-neutral programs as acceptable. Fisher was a 4-3 decision, with retired Justice Anthony Kennedy writing the majority opinion, and the late Justice Ruth Bader Ginsburg joining the majority. Both Kennedy and Ginsburg were replaced by archconservative Trump appointees. (The reason why only seven justices decided Fisher is that the case was handed down after Justice Antonin Scalia’s death created a vacancy on the Court, and Justice Elena Kagan was recused.)

    The Court’s current Republican supermajority has shown extraordinary hostility toward laws intended to promote racial equality, and it is well to the right of an earlier generation of Republicans, like former President George W. Bush. In 2006, for example, Bush signed legislation reauthorizing the Voting Rights Act, which forbids race discrimination in elections, but the current Supreme Court has since largely dismantled this historic piece of civil rights legislation.

    It’s not hard to imagine, in other words, that the Court’s current majority could hold that any policy that is motivated by a desire to increase opportunities for underrepresented racial minorities is constitutionally suspect.

From The Hindu: Sports

From The Hindu: National News

From BBC: Europe

From Ars Technica

From Jokes Subreddit