To rebuild the movement after the fall of Sam Bankman-Fried, EAs will need to embrace a humbler, more decentralized approach.
In May of this past year, I proclaimed on a podcast that “effective altruism (EA) has a great hunger for and blindness to power. That is a dangerous combination. Power is assumed, acquired, and exercised, but rarely examined.”
Little did I know at the time that Sam Bankman-Fried, a prodigy and major funder of the EA community who claimed he wanted to donate billions a year, was engaged in making extraordinarily risky trading bets on behalf of others with an astonishing and potentially criminal lack of corporate controls. It seems that EAs, who (at least according to ChatGPT) aim “to do the most good possible, based on a careful analysis of the evidence,” are also comfortable with a kind of recklessness and willful blindness that made my pompous claims seem more fitting than I had wished them to be.
By that autumn, investigations revealed that Bankman-Fried’s company assets, his trustworthiness, and his skills had all been wildly overestimated, as his trading firms filed for bankruptcy and he was arrested on criminal charges. His empire, now alleged to have been built on money laundering and securities fraud, had allowed him to become one of the top players in philanthropic and political donations. The disappearance of his funds and his fall from grace leaves behind a gaping hole in the budget and brand of EA. (Disclosure: In August 2022, SBF’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.)
People joked online that my warnings had “aged like fine wine,” and that my tweets about EA were akin to the visions of a 16th-century saint. Less flattering comments pointed out that my assessment was not specific enough to pass as divine prophecy. I agree. Anyone watching EA become corporatized over the last few years (the Washington Post fittingly called it “Altruism, Inc.”) would have noticed the movement growing increasingly insular, confident, and ignorant. Anyone would expect doom to lurk in the shadows when institutions turn stale.
On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply knowing more. Epistemic risks pervade our lives: We risk missing the bus if we don’t know the time, we risk infecting granny if we don’t know we carry a virus. Epistemic risk is why we fight coordinated disinformation campaigns and is the reason countries spy on each other.
Still, it is a bit ironic for EAs to have chosen ignorance over due diligence. Here are people who (smugly at times) advocated for precaution and preparedness, who made it their obsession to think about tail risks, and who doggedly tried to predict the future with mathematical precision. And yet, here they were, sharing a bed with a gambler against whom it was apparently easy to find allegations of shady conduct. The affiliation was a gamble that ended up putting their beloved brand and philosophy at risk of extinction.
How exactly did well-intentioned, studious young people once more set out to fix the world only to come back with dirty hands? Unlike others, I do not believe that longtermism — the EA label for caring about the future, which particularly drove Bankman-Fried’s donations — or a too-vigorous attachment to utilitarianism is the root of their miscalculations. A postmortem of the marriage between crypto and EA holds more generalizable lessons and solutions. For one, the approach of doing good by relying on individuals with good intentions — a key pillar of EA — appears ever more flawed. The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay. Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty.
The signature logo of EA is a clichéd heart in a lightbulb. The brand portrays the movement’s unique selling point: knowing how to take risks and do good. Risk mitigation is indeed partly a matter of knowledge. Understanding which catastrophes might occur is half the battle. Doing Good Better — the 2015 book on the movement by Will MacAskill, one of EA’s founding figures — wasn’t only about doing more. It was about knowing how to do it, and thereby squeezing more good from every unit of effort.
The public image of EA is that of a deeply intellectual movement, attached to the University of Oxford brand. But internally, a sense of epistemic decline became palpable over recent years. Personal connections and a growing cohesion around an EA party line had begun to shape the marketplace of ideas.
Pointing this out was, paradoxically, met with approval, agreement, and a refusal to do much about it. Their ideas, good and bad, continued to be distributed, advertised, and acted upon. EA donors, such as Open Philanthropy and Bankman-Fried, funded organizations and members in academia, like the Global Priorities Institute or the Future of Humanity Institute; they funded think tanks, such as the Center for Security and Emerging Technology or the Centre for Long-Term Resilience; and journalistic outlets such as Asterisk, Vox Future Perfect, and, ironically, the Law & Justice Journalism project. It is surely effective to pass EA ideas across those institutional barriers, which are usually intended to restrain favors and biases. Yet such approaches sooner or later claim intellectual rigor and fairness as collateral damage.
Disagreeing with some core assumptions in EA became rather exhausting. By 2021, my co-author Luke Kemp of the Centre for the Study of Existential Risk at the University of Cambridge and I thought that much of the methodology used in the field of existential risk — a field funded, populated, and driven by EAs — made no sense. So we attempted to publish an article titled “Democratising Risk,” hoping that criticism would give breathing space to alternative approaches. We argued that the idea of a good future as envisioned in Silicon Valley might not be shared across the globe and across time, and that risk had a political dimension. People reasonably disagree on what risks are worth taking, and these political differences should be captured by a fair decision process.
The paper proved to be divisive: Some EAs urged us not to publish, because they thought the academic institutions we were affiliated with might vanish and that our paper could prevent vital EA donations. We spent months defending our claims against surprisingly emotional reactions from EAs, who complained about our use of the term “elitist” or that our paper wasn’t “loving enough.” More concerningly, I received a dozen private messages from EAs thanking me for speaking up publicly or admitting, as one put it: “I was too cowardly to post on the issue publicly for fear that I will get ‘canceled.’”
Maybe I should not have been surprised about the pushback from EAs. One private message to me read: “I’m really disillusioned with EA. There are about 10 people who control nearly all the ‘EA resources.’ However, no one seems to know or talk about this. It’s just so weird. It’s not a disaster waiting to happen, it’s already happened. It increasingly looks like a weird ideological cartel where, if you don’t agree with the power holders, you’re wasting your time trying to get anything done.”
I would have expected a better response to critique from a community that, as one EA aptly put it to me, “incessantly pays epistemic lip service.” EAs talk of themselves in the third person, run forecasting platforms, and say they “update” rather than “change” their opinions. While superficially obsessed with epistemic standards and intelligence (an interest that can take ugly forms), real expertise is rare among this group of smart but inexperienced young people who only just entered the labor force. For reasons of “epistemic modesty” or a fear of sounding stupid, they often defer to high-ranking EAs as authorities. Doubts might reveal that they just didn’t understand the ingenious argumentation for a fate determined by technology. Surely, EAs must have thought, the leading brains of the movement will have thought through all the details?
Last February, I proposed to MacAskill — who also works as an associate professor at Oxford, where I’m a student — a list of measures that I thought could minimize risky and unaccountable decision-making by leadership and philanthropists. Hundreds of students across the world associate themselves with the EA brand, but consequential and risky actions taken under its banner — such as the well-resourced campaign behind MacAskill’s book What We Owe the Future, attempts to help Musk buy Twitter, or funding US political campaigns — are decided upon by the few. This sits well neither with the pretense of being a community nor with healthy risk management.
Another person on the EA forum messaged me saying: “It is not acceptable to directly criticize the system, or point out problems. I tried and someone decided I was a troublemaker that should not be funded. […] I don’t know how to have an open discussion about this without powerful people getting defensive and punishing everyone involved. […] We are not a community, and anyone who makes the mistake of thinking that we are, will get hurt.”
My suggestions to MacAskill ranged from modest calls to incentivize disagreement with leaders like him to conflict of interest reporting and portfolio diversifications away from EA donors. They included incentives for whistleblowing and democratically controlled grant-making, both of which likely would have reduced EA’s disastrous risk exposure to Bankman-Fried’s bets. People should have been incentivized to warn others. Enforcing transparency would have ensured that more people could have known about the red flags that were signposted around his philanthropic outlet.
These are standard measures against misconduct. Fraud is uncovered when regulatory and competitive incentives (be it rivalry, short-selling, or political assertiveness) are tuned to search for it. Transparency benefits risk management, and whistleblowing plays an essential role in historic discoveries of misconduct by big bureaucratic entities.
Institutional incentive-setting is basic homework for growing organizations, and yet, the apparent intelligentsia of altruism seems to have forgotten about it. Maybe some EAs, who fancied themselves “experts in good intention,” thought such measures should not apply to them.
We also know that standard measures are not sufficient. Enron’s conflict of interest reporting, for instance, was thorough and thoroughly evaded. They would certainly not be sufficient for the longtermist project, which, if taken seriously, would mean EAs trying to shoulder risk management for all of us and our descendants. We should not be happy to give them this job as long as their risk estimates are done in insular institutions with epistemic infrastructures that are already beginning to crumble. My proposals and research papers broadly argued that increasing the number of people making important decisions will on average reduce risk, both to the institution of EA and to those affected by EA policy. The project of managing global risk is — by virtue of its scale — tied to using distributed, not concentrated, expertise.
After I spent an hour in MacAskill’s office arguing for measures that would take arbitrary decision power out of the hands of the few, I sent one last pleading (and inconsequential) email to him and his team at the Forethought Foundation, which promotes academic research on global risk and priorities, and listed a few steps required to at least test the effectiveness and quality of decentralized decision-making — especially in respect to grant-making.
My academic work on risk assessments had long been interwoven with references to promising ideas coming out of Taiwan, where the government has been experimenting with online debating platforms to improve policymaking. I admired the works of scholars, research teams, tools, organizations, and projects, which amassed theory, applications, and data showing that more and more diverse groups of people tend to make better choices. Those claims have been backed by hundreds of successful experiments on inclusive decision-making. Advocates had more than idealism — they had evidence that scaled and distributed deliberations provided more knowledge-driven answers. They held the promise of a new and higher standard for democracy and risk management. EA, I thought, could help test how far the promise would go.
I was entirely unsuccessful in inspiring EAs to implement any of my suggestions. MacAskill told me that there was quite a diversity of opinion among leadership. EAs patted themselves on the back for running an essay competition on critiques against EA, left 253 comments on my and Luke Kemp’s paper, and kept everything that actually could have made a difference just as it was.
Sam Bankman-Fried may have owned a $40 million penthouse, but that kind of wealth is an uncommon occurrence within EA. The “rich” in EA don’t drive faster cars, and they don’t wear designer clothes. Instead, they are hailed as being the best at saving unborn lives.
It makes most people happy to help others. This altruistic inclination is dangerously easy to repurpose. We all burn for an approving hand on our shoulder, the one that assures us that we are doing good by our peers. The question is, how badly do we burn for approval? What will we burn to the ground to attain it?
If your peers declare “impact” as the signpost of being good and worthy, then your attainment of what looks like ever more “good-doing” is the locus of self-enrichment. Being the best at “good-doing” is the status game. But once you have status, your latest ideas of good-doing define the new rules of the status game.
EAs with status don’t get fancy, shiny things, but they are told that their time is more precious than others’. They get to project themselves for hours on the 80,000 Hours podcast, their sacrificial superiority in good-doing is hailed as the next level of what it means to be “value-aligned,” and their often incomprehensible fantasies about the future are considered too brilliant to fully grasp. The thrill of beginning to believe that your ideas might matter in this world is priceless and surely a little addictive.
We do ourselves a disservice by dismissing EA as a cult. Yes, they drink liquid meals, and do “circling,” a kind of collective, verbalized meditation. Most groups foster group cohesion. But EA is a particularly good example that shows how our idea about what it means to be a good person can be changed. It is a feeble thing, so readily submissive to and forged by raw status and power.
Doing right by your EA peers in 2015 meant checking out a randomized controlled trial before donating 10 percent of your student budget to combating poverty. I had always refused to assign myself the cringe-worthy label of “effective altruist,” but I too had my few months of a love affair with what I naively thought was my generation’s attempt to apply science to “making the world a better place.” It wasn’t groundbreaking — just commonsensical.
But this changed fast. In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.
Individuals were to be assessed along dimensions such as “integrity” or “strategic judgment” and “acting on own direction,” but also on “being value-aligned,” “IQ,” and “conscientiousness.” Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 “pledge equivalents” = 3 million “aligned dollars”).
What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would have PELTIV points subtracted, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
The list showed just how much what it means to be “a good EA” has changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.
When I confronted the instigator of PELTIV, I was told the measure was ultimately discarded. Upon my request for transparency and a public apology, he agreed the EA community should be informed about the experiment. They never were. Other metrics such as “highly engaged EA” appear to have taken its place.
All metrics are imperfect. But a small error between a measure of what is good to do and what is actually good to do makes a big difference fast if you’re encouraged to optimize for the proxy. It’s the difference between recklessly sprinting and cautiously stepping in the wrong direction. Going slow is a feature, not a bug.
It’s curious that effective altruism — the community that was most alarmist about the dangers of optimization and bad metrics in AI — failed to immunize itself against the ills of optimization. Few pillars in EA stood as constant as the maxim to maximize impact. The direction and goalposts of impact kept changing, while the attempt to increase velocity, to do more for less, to squeeze impact from dollars, remained. In the words of Sam Bankman-Fried: “There’s no reason to stop at just doing well.”
The recent shift to longtermism has gotten much of the blame for EA’s failures, but one does not need to blame longtermism to explain how EA, in its effort to do more good, might unintentionally do some bad. Take their first maxim and look no further: Optimizing for impact provides no guidance on how one makes sure that this change in the world will actually be positive. Running at full speed toward a target that later turns out to have been a bad idea means you still had impact — just not the kind you were aiming for. The assurance that EA will have positive impact rests solely on the promise that their direction of travel is correct, that they have better ways of knowing what the target should be. Otherwise, they are optimizing in the dark.
That is precisely why epistemic promise is baked into the EA project: By wanting to do more good on ever bigger problems, they must develop a competitive advantage in knowing how to choose good policies in a deeply uncertain world. Otherwise, they simply end up doing more, which inevitably includes more bad. The success of the project was always dependent on applying better epistemic tools than could be found elsewhere.
Longtermism and expected value calculations merely provided room for the measure of goodness to wiggle and shape-shift. Futurism gives rationalization air to breathe because it decouples arguments from verification. You might, by chance, be right on how some intervention today affects humans 300 years from now. But if you were wrong, you’ll never know — and neither will your donors. For all their love of Bayesian inference, their endless gesturing at moral uncertainty, and their norms of superficially signposting epistemic humility, EAs became more willing to venture into a far future where they were far more likely to end up in a space so vast and unconstrained that the only feedback to update against was themselves.
I am sympathetic to the type of greed that drives us beyond wanting to be good to instead be certain that we are good. Most of us have it in us, I suspect. The uncertainty over being good is a heavy burden to carry. But a highly effective way to reduce the psychological dissonance of this uncertainty is to minimize your exposure to counter-evidence, which is another way of saying that you don’t hang out with people whom EAs call “non-aligned.” Homogeneity is the price they pay to escape the discomfort of an uncertain moral landscape.
There is a better way.
It should be the burden of institutions, not individuals, to face and manage the uncertainty of the world. Risk reduction in a complex world will never be done by people cosplaying perfect Bayesians. Good reasoning is not about eradicating biases, but about understanding which decision-making procedures can find a place and function for our biases. There is no harm in being wrong: It’s a feature, not a bug, in a decision procedure that balances your bias against an opposing bias. Under the right conditions, individual inaccuracy can contribute to collective accuracy.
I will not blame EAs for having been wrong about the trustworthiness of Bankman-Fried, but I will blame them for refusing to put enough effort into constructing an environment in which they could be wrong safely. Blame lies in the audacity to take large risks on behalf of others, while at the same time rejecting institutional designs that let ideas fail gently.
EA contains at least some ideological incentive to let epistemic risk slide. Institutional constraints, such as transparency reports, external audits, or testing big ideas before scaling, are deeply inconvenient for the project of optimizing toward a world free of suffering.
And so they daringly expanded a construction site of an ideology, which many knew to have gaping blind spots and an epistemic foundation that was beginning to tilt off balance. They aggressively spent large sums publicizing half-baked policy frameworks on global risk, aimed to teach the next generation of high school students, and channeled hundreds of elite graduates to where they thought they needed them most. I was almost one of them.
I was in my final year as a biology undergraduate in 2018, when money was still a constraint, and a senior EA who had been a speaker at a conference I had attended months prior suggested I should consider relocating across the Atlantic to trade cryptocurrency for the movement and its causes. I loved my degree, but it was nearly impossible not to be tempted by the prospects: Trading, they said, could allow me personally to channel millions of dollars into whatever causes I cared about.
I agreed to be flown to Oxford, to meet a person named Sam Bankman-Fried, the energetic if distracted-looking founder of a new company called Alameda. All interviewees were EAs, handpicked by a central figure in EA.
The trading taster session on the following day was fun at first, but Bankman-Fried and his team were giving off strange vibes. In between ill-prepared showcasing and haphazard explanations, they would go to sleep for 20 minutes or gather semi-secretly in a different room to exchange judgments about our performance. I felt like a product, about to be given a sticker with a PELTIV score. Personal interactions felt as fake as they did during the internship I once completed at Goldman Sachs — just without the social skills. I can’t remember anyone from his team asking me who I was, and halfway through the day I had fully given up on the idea of joining Alameda. I was rather baffled that EAs thought I should waste my youth in this way.
Given what we now know about how Bankman-Fried led his companies, I am obviously glad to have followed my vaguely negative gut feeling. I know many students whose lives changed dramatically because of EA advice. They moved continents, left their churches, their families, and their degrees. I know talented doctors and musicians who retrained as software engineers, when EAs began to think working on AI could mean your work might matter in “a predictable, stable way for another ten thousand, a million or more years.”
My experience now illustrates what choices many students were presented with and why they were hard to make: I lacked rational reasons to forgo this opportunity, which seemed daring or, dare I say, altruistic. Education, I was told, could wait, and in any case, if timelines to achieving artificial general intelligence were short, my knowledge wouldn’t be of much use.
In retrospect, I am furious about the presumptuousness that lay at the heart of leading students toward such hard-to-refuse, risky paths. Tell us twice that we are smart and special and we, the young and zealous, will be in on your project.
I care rather little about the death or survival of the so-called EA movement. But the institutions have been built, the believers will persist, and the problems they claim to tackle — be it global poverty, pandemics, or nuclear war — will remain.
For those inside of EA who are willing to look to new shores: Make the next decade in EA be that of the institutional turn. The Economist has argued that EAs now “need new ideas.” Here’s one: EA should offer itself as the testing ground for real innovation in institutional decision-making.
It seems rather unlikely indeed that current governance structures alone will give us the best shot at identifying policies that can navigate the highly complex global risk landscape of this century. Decision-making procedures should be designed such that real and distributed expertise can affect the final decision. We must identify what institutional mechanisms are best suited to assessing and choosing risk policies. We must test what procedures and technologies can help aggregate biases to wash out errors, incorporate uncertainty, and yield robust epistemic outcomes. The political nature of risk-taking must be central to any steps we take from here.
Great efforts, like the establishment of a permanent citizen assembly in Brussels to evaluate climate risk policies or the use of machine learning to find policies that more people agree with, are already ongoing. But EAs are uniquely placed to test, tinker, and evaluate more rapidly and experimentally: They have local groups across the world and an ecosystem of independent, connected institutions of different sizes. Rigorous and repeated experimentation is the only way in which we can gain clarity about where and when decentralized decision-making is best regulated by centralized control.
Researchers have amassed hundreds of design options for procedures that vary in when, where, and how they elicit experts, deliberate, predict, and vote. There are numerous available technological platforms, such as loomio, panelot, decidim, rxc voice, or pol.is, that facilitate online deliberations at scale and can be adapted to specific contexts. New projects, like the AI Objectives Institute or the Collective Intelligence Project, are brimming with startup energy and need a user base to pilot and iterate with. Let EA groups be a lab for amassing empirical evidence behind what actually works.
Instead of lecturing students on the latest sexy cause area, local EA student chapters could facilitate online deliberations on any of the many outstanding questions about global risk and test how the integration of large language models affects the outcome of debates. They could organize hackathons to extend open source deliberation software and measure how proposed solutions changed relative to the tools that were used. EA think tanks, such as the Centre for Long-Term Resilience, could run citizen assemblies on risks from automation. EA career services could err on the side of providing information rather than directing graduates: 80,000 Hours could manage an open source wiki on different jobs, available for experts in those positions to post fact-checked, diverse, and anonymous advice. Charities like GiveDirectly could build on their recipient feedback platform and their US disaster relief program, to facilitate an exchange of ideas between beneficiaries about governmental emergency response policies that might hasten recovery.
Collaborative, not individual, rationality is the armor against the gradual and otherwise inevitable tendency to become blind to an unfolding catastrophe. The mistakes made by EAs are surprisingly mundane, which means that the solutions are generalizable and most organizations will benefit from the proposed measures.
My article is clearly an attempt to make EA members demand they be treated less like sheep and more like decision-makers. But it is also a question to the public about what we get to demand of those who promise to save us from any evil of their choosing. Do we not get to demand that they fulfill their role, rather than rule?
The answers will lie in data. Open Philanthropy should fund a new organization for research on epistemic mechanism design. This central body should receive data donations from a decade of epistemic experimentalism in EA. It would be tasked with making this data available to researchers and the public in a form that is anonymized, transparent, and accessible. It should coordinate, host, and connect researchers with practitioners and evaluate results across different combinations, including variable group sizes, integrations with discussion and forecasting platforms, and expert selections. It should fund theory and software development, and the grants it distributes could test distributed grant-giving models.
Reasonable concerns might be raised about the bureaucratization that could follow the democratization of risk-taking. But such worries are no argument against experimentation, at least not until the benefits of outsourced and automated deliberation procedures have been exhausted. There will be failures and wasted resources. It is an inevitable feature of applying science to doing anything good. My propositions offer little room for the delusions of optimization, instead aiming to scale and fail gracefully. Procedures that protect and foster epistemic collaboration are not a “nice to have.” They are a fundamental building block to the project of reducing global risks.
One does not need to take my word for it: The future of institutional, epistemic mechanism designs will tell us how exactly I am wrong. I look forward to that day.
Carla Zoe Cremer is a doctoral student at the University of Oxford in the department of psychology, with funding from the Future of Humanity Institute (FHI). She studied at ETH Zurich and LMU in Munich and was a Winter Scholar at the Centre for the Governance of AI, an affiliated researcher at the Centre for the Study of Existential Risk at the University of Cambridge, a research scholar (RSP) at the FHI in Oxford, and a visitor to the Leverhulme Centre for the Future of Intelligence in Cambridge.
Facebook, Google, and Amazon are trying to get their groove back.
When Meta’s head of people, Lori Goler, posted a memo to the company’s internal employee message board last summer asking employees to work with “increased intensity,” many workers pushed back.
In internal comments Recode reviewed, some employees took issue with the idea that they weren’t working hard enough already. Others felt the problems weren’t with the rank and file, but with management and the company’s massive size and bureaucratic structure, which some said made it hard to move quickly on daily work or to give feedback to leadership. Another complaint was simply that some Meta employees didn’t want to do more work for the same amount of money. Because many Meta employees are paid in company stock, which has declined precipitously in the past year, the workers would actually be doing more for less.
The real topic at hand was whether a tech giant can or should try to behave like a startup.
Massive technology companies like Meta used to be startups, of course. But that was decades ago, when they were much smaller and more agile, and when they were making products with seemingly infinite possibilities for profit. Now these companies are asking their employees to work with “increased intensity” without any near-term payoff — in other words, to act like eager and ambitious startup workers — but in a vastly different scenario. Meta, Alphabet, and Amazon are now huge and highly profitable companies contending with antitrust regulators for being too big and powerful, rather than too small and scrappy. Their employees are being asked to work harder or face layoffs not because their companies aren’t making any money, but because they’re not making it fast enough.
This kind of messaging is emerging as America’s biggest tech companies are starting to show their age. Meta, formerly known as Facebook, is old enough to vote. Alphabet, formerly Google, is in its mid-20s, and Amazon will soon enter its fourth decade of operations. At the same time, the rapid growth that has historically defined these companies has slowed. Wall Street has taken notice: The combined market caps of Meta, Google, and Amazon have declined $1.5 trillion in the last year.
As one Googler put it in an interview, “There was a time when Google was young and hungry. But we haven’t been young or hungry for quite some time.”
Leadership at these three companies is now doing its best to conjure the good old days — the scrappy days. Sundar Pichai, CEO of both Alphabet and Google, is trying to remind people that Google was once “small and scrappy,” telling workers that working hard and having fun “shouldn’t always equate to money.” The company laid off 12,000 people at the end of January. At Meta, which let 11,000 employees go in November, CEO Mark Zuckerberg has said he wants workers to “return to a scrappier culture.” Meanwhile, Amazon CEO Andy Jassy told Amazon employees this month to be “inventive, resourceful, and scrappy in this time when we’re not hiring expansively and eliminating some roles,” following massive corporate layoffs at the end of last year, with more to come.
“Any company that wants to have a lasting impact must practice disciplined prioritization and work with a high level of intensity to reach goals,” Meta told Recode in a response to requests for comment for this article. “The reports about these efforts are consistent with this focus and what we’ve already shared publicly about our operating style.”
Google and Amazon did not respond to requests for comment for this story.
The survival of these companies isn’t in question. What’s unclear is which changes they’ll need to make in order to grow and create world-changing products, as they have done in years past. Inevitably, the moves these companies make as they try to shift their businesses and culture will have huge ramifications that extend far beyond the technology industry, as tech companies tend to influence the behavior of corporate America in general.
For now, layoffs look like the biggest course correction in Silicon Valley. On one hand, getting rid of thousands of employees is a form of “right-sizing” for these companies, a correction for overhiring during the pandemic. On the other, asking remaining employees to get more done with fewer resources can be demoralizing and could drive away some of the best employees.
“I don’t think remaining a very large company and then saying, ‘We’re going into startup mode,’ is going to work,” tech historian and University of Washington professor Margaret O’Mara said. “You’re just going to have unhappy workers because they’re working really hard and they’re not seeing the upside.”
It probably doesn’t help that many tech companies are also scaling back on their most over-the-top perks. Google is cutting down on travel and recently laid off nearly 30 in-house massage therapists. Meta axed its complimentary laundry service. Across the board, there’s less free food to go around.
But Drew Pascarella, senior finance lecturer at Cornell Business School, thinks the startup messaging could ultimately have a useful effect in helping to break the negative news cycle around layoffs and creating a more positive atmosphere for remaining employees.
“They’re using this to positively evoke the yesteryear of when it was fun and cool to work for tech in the Valley,” Pascarella said. He added that the message isn’t without merit, in that these companies still are innovative to an extent. They also have subdivisions that are still designed to behave like startups.
That said, tech giants are cutting back on moonshots, those ambitious R&D projects that typically don’t make much money. Google axed a neural network effort that modeled the brains of flies, made cuts to its innovation unit, and even laid off some workers in AI, which the company has said is still a “key” investment area. Amazon is scaling back development of Alexa, which captured our collective imagination by making talking to machines mainstream but was also losing gobs of cash. Meta is perhaps the odd one out since it’s doubling down on its biggest moonshot, the metaverse, but the company has axed other major projects, like its Portal video chat hardware.
All these cuts and layoffs allow companies to save money in the short term, and the stock markets have responded positively. But too many cuts could jeopardize growth in the future: the companies don’t know whether a money-losing line item today might be the next Google Ads or Instagram. The cuts also mark a distinct departure from the companies’ startup roots, when potential growth was prioritized over profitability.
We talked to half a dozen employees at Google, Meta, and Amazon, whom we granted anonymity so as not to jeopardize their employment, as well as tech industry experts about how these companies are trying to right their ships and whether it can work. What happens next depends on how the companies execute these changes as well as how employees and investors respond — not to mention how innovative these companies can be when this is all over.
To some extent, tech workers have accepted certain kinds of cuts as reasonable. Opulent holiday celebrations, rampant swag, and omnipresent food were always considered a bit over the top, even compared to some of the more indulgent startups. (As one Google employee put it, “Coming in from smaller shops, I thought, ‘Man, these Google people are really spoiled.’”) So it was no surprise when Google restricted employee travel, including to social events or in-person events with virtual options. Few were shocked when Meta limited the number of free restaurants it offers at its main campus in Menlo Park.
There’s also no doubt that the rampant hiring during the pandemic left a bit of headcount bloat that these companies could afford to lose. Amazon nearly doubled its employee numbers to 1.5 million in the third quarter of 2022, up from 800,000 in 2019. Meta also nearly doubled its employees from 45,000 in 2019 to 87,000 in that time. Google had grown its headcount more than 50 percent since the end of 2019 to 187,000 in September 2022.
The problem, though, is that layoffs don’t necessarily save money. In conjunction with asking workers to work harder, they can also have unintended negative consequences.
“I think people are afraid in a way that I have not experienced in the tech industry in a very long time,” another Google employee said. While that can motivate people to work harder and to prove their projects are worthwhile to the company’s bottom line, the employee said it can also drive unwanted behaviors, like workers fighting “turf wars” over high-priority projects. The employee added that, in the past, teams might share code or combine feature requests when they found overlap in their work. That’s no longer the case. Instead, one team won’t wait for another or share code. They might, however, start talking about the deficiencies of the other team.
There’s also the distinct possibility that asking remaining workers to work harder and be more efficient won’t work, and will instead just demoralize them.
That’s how things have panned out at Google so far. For a while, the fact that the company had avoided major layoffs had been a point of pride for its workers, one that suggested they were valued employees at a well-run company. Over the holidays, workers posted memes on the company’s internal communications thanking Pichai for not laying off workers and, by extension, not being like seemingly every other tech company.
Last week’s layoffs changed things. Google employees struggled to find a consistent rationale for layoffs, as they seemed to span teams, tenures, and high performers.
“No one knows what’s stable now,” a Google software engineer told Recode after the layoffs. “Morale is low.” While layoffs might cause some people to work harder, he speculated that many others might feel demotivated and look for other work, given the breadth of the layoffs. “Their view of it is, ‘I don’t know if working hard means I keep my job. I don’t understand why the layoffs happened the way they did. My colleague over here was amazing. And they’re gone.’”
Layoffs at Meta also seemed to have had a negative impact on employees, some of whom resent the idea that they are expected to now work harder.
“There’s no way I’m staying at Meta if I’m told to work startup hours,” one Meta employee told Recode.
David Yoffie, Harvard Business School professor and longtime tech board member at companies including Intel and HTC, says that the language around working harder partly stems from Elon Musk’s high-profile push for his Twitter employees to be “extremely hardcore” and a general feeling in Silicon Valley that the “intensity which characterized the early days is gone.” It amounts to little more than rhetoric, he said.
“These companies are too big for these kinds of short-term rants to have a big impact,” Yoffie explained. “Preaching you need to work harder to 70,000 people does not work.” Even worse, such cuts can cause some of the best talent to leave, ultimately harming the company’s prospects. “Whenever companies start to go down this route, the very best employees, who are going to get hired even in a bad environment, end up moving, and that weakens the company as a whole,” he added.
But some Silicon Valley executives are energized by the cuts. For too long during tech’s boom cycle, the thinking goes, big companies hired endlessly. Now that the tech economy has tightened, it’s a good time for executives to “cut that fat,” as one former Meta manager told Recode in September. That feeling might be shared by leaders at Google, too.
“Google — like any large company — has parts where people work incredibly hard, but there’s large parts of the company where it’s just a very comfortable place to be,” said Laszlo Bock, Google’s former head of HR and co-founder of the HR tech company Humu. With the economic downturn, Bock said, there’s an opportunity for management to get rid of longtime employees who are highly paid and perceived to be a little too comfortable.
Employees and experts are more ambivalent about how these companies are now cutting moonshots. That’s largely because it can be hard to tell in the early stages of development what will be the next big thing and what’s just a waste of time and money. A former Amazon employee told Recode that there has been less discipline around cutting products that don’t actually meet customer needs, referring to how the company once quickly ceased production on its Fire Phone. Another said that since Jassy became CEO in 2021, the company has been reluctant to invest in or even consider moonshot ideas.
Several Google employees said that the company has long kept unprofitable projects going beyond their usefulness, and that getting rid of some of them might be for the best. Google is famous for trying unexpected new things. Some of these efforts have turned into profitable products, like Gmail, while others have helped prop up Google’s reputation for innovation. The fear is that by getting rid of these risky side projects, the company might miss the next big thing. There is also a fear that something has changed at the company, since few of these projects have panned out in recent years.
“Why isn’t it working? What is the special sauce that we used to have when we were doing Maps, and Google Docs, and Sheets and Cloud even?” one Google employee asked.
It’s tough to figure out what’s next for Big Tech companies, since their scale makes it difficult to draw historical comparisons. Do they become Microsoft and go into something like cloud computing? Or do they fade from glory like Xerox or RCA, companies that made some of the biggest technological innovations of their time but failed to shepherd that innovative spirit into the next era?
To stay on the cutting edge, tech giants are leaning into their own visions of the future. Meta is going all in on the metaverse. Google is focusing its efforts on AI, even calling in Google’s founders to help with the mission. And Amazon’s Jassy says he’s doubling down on Amazon’s ethos of “Invent and Simplify,” but he’s also moved the goalposts on what it means to innovate to include more basic improvements.
So far, Wall Street has been receptive to these approaches, but that reception has been muted: Daniel Keum, an associate professor of management at Columbia Business School, called the reaction “not crazy but significant.” Still, Meta, Alphabet, and Amazon have a long way to go, with their stock prices roughly 50 percent down from their peak in 2021.
The experts Recode spoke to offered a variety of suggestions for how these companies could solve their problems. Many of those ideas seem abstract and hard to actually accomplish, however. Yoffie, for example, said that these tech giants should focus on “reinvigorating small teams that have the flexibility to do creative and new innovations.” But that would require allowing more autonomy within these giant, bureaucratic institutions, not to mention more funding.
“You can help them get back to growth, if and only if they are able to maintain a level of innovation that would enable them to grow new businesses and to expand,” he said. Deciding where to put that money while making necessary cuts comes down to good leadership — something not easily defined.
The advice from Pascarella, the Cornell lecturer, is more quotidian. He says it’s important for companies to “stay true to core products and successes and to not relinquish market position” — something it seems they’re already doing.
University of Washington’s O’Mara emphasized the need for visionary leadership at these companies. “That isn’t necessarily being like, ‘We’re gonna go back to startup days,’” she said. “It’s more executive leadership that is providing a clear, exciting vision that is mobilizing and motivating people.”
Keum offered a slightly different perspective. He said that regulatory headwinds and slowing growth mean that these companies should invest in new startups — but not acquire them in their early stages — with the hope that they might lead to big growth. Microsoft’s latest investment in ChatGPT is a good example of how this could work for tech giants, he said.
That’s not exactly the same thing as Meta, Alphabet, and Amazon trying to be more like startups, of course. It might be impossible for these tech companies, which are now massive corporations, to reignite that spirit, according to Bock, the former Google HR head.
“Even with free food, even with the beanbags and lava lamps, we still felt like things could fall apart at any minute,” said Bock, who started at the company in 2006. That existential crisis, and the drive that comes with it, just doesn’t exist anymore, as the company rakes in huge profits despite the latest downturn.
In Bock’s words: “It’s hard to recreate that fear now.”
Jason Del Rey contributed reporting.
A conversation with Robert Garcia, a first-year Democrat, on the coming fireworks in the Republican-controlled House Oversight Committee. Garcia is the first openly LGBTQ immigrant in Congress.
Both congressional Democrats and Republicans have now finalized the lists of members who will sit on committees this Congress, and that includes the high-profile House Committee on Oversight and Accountability.
Rep. James Comer, the top Republican on the committee, has already promised it will be “probably the most exciting committee” in congressional history, and despite recently trying to clear the air about just what the committee has the power to do, he plans to make good on promises during the midterm campaign cycle to lead an onslaught of investigations into the Biden administration, the president’s family, and a variety of red-meat conservative cultural issues, like the cancellation of Newsmax on DirecTV.
Democrats on that committee will be led by Rep. Jamie Raskin, one of the high-profile House managers in the second impeachment trial of former President Donald Trump, and he’ll be joined by an all-star line-up of House Democrats, including eight new members.
Among them: 45-year-old Rep. Robert Garcia of California, the Democrats’ first-year class president, who is the first gay immigrant to serve in Congress. Formerly the mayor of Long Beach, California, a city just south of Los Angeles, Garcia has already taken on the mantle of House Freedom Caucus gadfly — mocking some of the right wing’s most visible figures, like GOP Reps. Marjorie Taylor Greene and Lauren Boebert, for their conspiratorial thinking about subjects like the Covid-19 pandemic and vaccine. It’s a personal subject for Garcia, whose mother and stepfather died of Covid complications in 2020.
He and his first-term colleagues intend to hold these GOP investigators accountable themselves — fitting the new Democratic strategy to go after their Republican inquisitors.
I caught up with Garcia on Friday, when he told me he’s ready to “take on [their] bullshit” and contest every narrative the new committee tries to weave. Our conversation, below, has been edited for length and clarity.
So you got your assignments and you’ll be sitting across from some of the most colorful, conspiratorial, and controversial Republican members of Congress. How are you expecting these committees to behave, and what do you think it will be like to watch from the outside?
Well, clearly, Kevin McCarthy’s put the most extreme members of his caucus on Oversight. We have folks like Marjorie Taylor Greene, Lauren Boebert, and Paul Gosar. These are folks that take their cues from QAnon, that are attacking vaccines, that are obsessed with Hunter Biden’s laptop and the president. So it’s important that we are ready every single day to show up, to fight back with facts, to push back with as much energy as they are. And we are. I’m fired up to take folks like Marjorie Taylor Greene on, take on her bullshit, take on her lies, and ensure that there are strong voices on our side that are actually pushing facts.
Because you’re going to be serving on the Oversight Committee with some of the most well-known Freedom Caucus members. Rep. James Comer, the chair, has said to expect it to be one of the most exciting committees in congressional history. And you will have something like a star-studded cast of Democrats on this committee, too — Maxwell Frost, Dan Goldman, Summer Lee, Jamie Raskin, Alexandria Ocasio-Cortez — have you had conversations with your colleagues about what you’re trying to prioritize, and what narratives you’re trying to set or change?
Well, I’m really excited about the freshmen that are on the committee. I’ve been talking to Maxwell [Frost, of Florida], and to Dan [Goldman, of New York], and to Summer [Lee, of Pennsylvania] and I think we’re all just fired up to take on the lies and take on the extremists on the other side. Folks like Marjorie Taylor Greene have a huge megaphone that they’re using to hurt people. Take what they’re saying about vaccines, for example. It’s shameful that they’re causing harm to Americans across this country and they’re going to use this committee to weaponize science and facts and truth and so it’s going to be our job to call that out. I think that we’re ready to do that.
With the freshmen that are on this committee, and the whole committee, it’s exciting. To be able to be on a committee with folks like Katie Porter and AOC and Jamie Raskin, and then the freshmen — we’re gonna be really prepared.
Some of these issues — Covid, immigration — are also really personal, right? They’ve had an effect in your life and you have a connection to them. So I’m wondering how you’re thinking about handling those topics and what specifically you’re worried about and expecting from the Republicans.
Chairman Comer has noted that the first hearing is going to be on the pandemic, and it’ll happen [this week]. The fact is that they’re going to use these hearings to try to dismantle support for pandemic prevention and that they’re going to attack how the government responded.
I was mayor of my community the entire time the pandemic was ravaging communities. Schools were closed and people were dying. In my city alone, we lost over 1,300 people. I lost two parents to the pandemic. And so I deeply understand how important vaccines are, how important pandemic prevention is, and what actually happened on the ground. Mayors were on the front lines responding to the pandemic. And if there was a failure in government, it was the Trump administration leaving cities and states to fend for themselves early on.
So I look forward to speaking on that topic from a position of authority and of experience, both of personal experience and personal loss, but also managing a city that President Biden called [out as] having one of the best pandemic responses in the country. He called Long Beach a national model. The governor of California, Gavin Newsom, said we had the best response in the state as it relates to the pandemic. And so I am going to be very engaged coming up.
Have you talked with some of the other freshmen representatives about the specific issues or items that you all plan to confront or respond on?
We’ve all already talked — all the freshmen that are on Oversight — we all know each other, we’re all friends, we all support each other. And the big thing that we’re all united on, is that we’re not going to allow for the extreme voices on the far right to go unchecked. And so when you have folks like Greg [Casar of Texas], and Jared [Moskowitz of Florida] and [Dan Goldman] and [Maxwell Frost], and so many others — we’re all prepared for this moment. And we’re also grateful to the leadership for believing in us to be that voice, because we understand the importance of this committee, and what the Republicans are going to try to do to the country through this committee. We get that we’re on the front lines of this fight.
You’ve definitely started that fight on Twitter at least, taking some shots at Marjorie Taylor Greene and Lauren Boebert. What is it about them that makes you particularly target them?
The two of them, almost more than most folks in Congress, have pushed the absolute craziest lies and falsehoods. And so I think it’s time for people to take them on, and not allow them or give them space to continue doing this. The thing about our class is that we’re also very impatient and we’re aggressive, and we want to make sure that we’re pushing back quickly. And that’s something that we’re really focused on.
That’s really interesting, because rapid response and counter-messaging isn’t necessarily something that previous classes of Democratic members of Congress were particularly great at, especially in today’s media environment. What is it about you and your classmates that makes you better suited for this match?
The group of freshmen coming in are very much in touch with our communities back home, we understand the power of communicating to the public, to a younger audience, to folks who are frustrated with politics. Folks like Comer, and Taylor Greene, and Boebert, and Gosar — all these folks don’t represent the mainstream of American politics or mainstream for really anything. They’re on the fringe. The Freedom Caucus are on the fringe of the country. So all of us that are coming in as freshmen — you probably get a sense of this — we’re unapologetic, we’re all pissed off about injustices in this country, and for me, personally, when I hear or when I see folks like Marjorie Taylor Greene lie about vaccines that could have saved my mom’s life, that she would have absolutely been first in line to take, that’s the kind of stuff, to me, that’s not acceptable.
It also seems to me like this new class of Democrats is better versed in social media, cultural references, and taking advantage of a different communications ecosystem — something previous committees in the Obama years didn’t have or couldn’t do well. You were tweeting about RuPaul’s Drag Race, for example, when teasing Rep. George Santos and Taylor Greene.
I think that’s right. One thing you have from a lot of the freshmen is we come from very different types of backgrounds and you started seeing that happen in Congress a few years ago, when different kinds of folks started getting elected. It’s more of a working-class group of new folks. It’s folks that are maybe a little bit more aggressive, a little more impatient, so that means we bring our full selves. For me, yes, I happen to be a RuPaul’s Drag Race fan, I love drag, I think it’s a great art form. And I’m not going to apologize for that.
When you have folks that are attacking artists, like drag performers, and you have folks that are attacking folks that are trans, they need to be called out. For me, as a gay person and as a queer person, I have a responsibility to call them out. And I’m going to use whatever tools I have available to do that.
You’ll also be on the Homeland Security committee. What’s it going to be like to be in the minority and try to get stuff done?
Well our plan is to win back the House in two years. But yeah, not a lot of stuff is going to get through this Congress. Republicans are focused on national abortion bans, demonizing gay kids, wars against books and women — but we’ve heard from some more moderate Republicans, interested in immigration reform, and they understand there is a labor shortage and immigration reform can help.
Being on Homeland Security is an opportunity also for those progressives that are on the committee … to bring attention to issues around justice for immigrants, for asylum seekers at the border, and making sure that these agencies are actually doing their job and not harming people. And we want to protect the American people from all sorts of attacks, and that includes domestic terrorists, and that includes white supremacists.
And you’ll be on that committee with Marjorie Taylor Greene, too.
Yeah, I don’t know what kind of luck of the draw I have that she also happens to be on Homeland Security, which is, like, totally insane.
But it’s going to be, again, important on that committee to push back against the demonizing of migrants and the attacks on asylum seekers and being in a position where the federal law enforcement agencies can get scrutiny.
And finally, on a more personal level — are you ready to take the heat? To face the onslaught of supporters these right-wing members have online or in real life, here in DC or at home?
I’ve definitely thought about those things. But I’ve been through more than most folks, as a queer immigrant. Gay people [and] immigrants are some of the most resilient people in this country. This moment requires a different type of leadership and Congress member. That’s what our class brings and we’re fired up and ready to go.
Forever and Lady Cadet impress -
Singer Sargent and Big Red impress -
Stellantis, Fast Pace, Golden Peaks and Dragon’s Gold please -
Team Fore Horsemen lifts GKD Gold Cup - Bushwackers and The Pro and his Crew sealed the top two slots in the gross category
Indian men’s hockey team chief coach Graham Reid resigns following World Cup debacle - Analytical coach Greg Clark and scientific advisor Mitchell David Pemberton also step down
Parents, students of CBSE school in Gudalur against change of Class 10 board exam centre -
Agri hackathon planned as part of the 2023 VAIGA expo -
Jailed godman Asaram convicted of rape by court in Gandhinagar - Asaram was arrested by the Jodhpur police on August 31, 2013 in a separate rape case and has been in jail since then
HC moved for probe into misappropriation of funds of charitable trust -
Kerala High Court adjourns election petition of defeated LDF candidate in Perinthalmanna to Feb. 1 - K.P. Mohammed Musthafa challenges election on the ground that 348 postal ballots of Absentee Voters were improperly rejected
German chancellor says he won’t send fighter jets to Ukraine - The request for the jets comes just days after Germany committed to supplying Leopard 2 tanks.
Ukraine: Boris Johnson says Putin threatened him with missile strike - Boris Johnson says he had an “extraordinary” call with President Putin before the Ukraine invasion.
Ukraine war: Russian athletes cannot be allowed at Olympics, Zelensky says - Olympics chiefs say Russian and Belarusian athletes could compete as neutrals, but Ukraine opposes the move.
Erdogan says Turkey may block Sweden’s Nato membership bid - The president said he would not support Sweden’s bid until it extradited dozens of “terrorists”.
Petr Pavel: Ex-general beats populist rival in Czech election - Defeated former PM Andrej Babis conceded to retired Nato general Petr Pavel on Saturday afternoon.
The flight tracker that powered @ElonJet has taken a left turn - ADS-B Exchange is now owned by private equity—and now even its biggest fans are bailing. - link
The weekend’s best deals: Apple computers, Kindles, 4K TVs, charging cables, and more. - Dealmaster also has iPads, storage solutions, and computer components and peripherals. - link
Most criminal cryptocurrency is funneled through just 5 exchanges - A few big players are moving a “shocking” amount of currency in a tight market. - link
Annual? Bivalent? For all? Future of COVID shots murky after FDA deliberations - FDA seems sold on annual shots, but advisors call for a lot more data. - link
D&D maker retreats from attempts to update longstanding “open” license - Negative response to latest draft was “in such high volume” as to force WotC’s hand. - link
Russian Prime Minister Medvedev comes to Putin and nervously tells him to abolish time zones. -
“I fly to another city, call home, and everyone is asleep; I woke you up at 4 a.m. thinking it was only evening. I call Angela Merkel to congratulate her on her birthday, and she tells me it was yesterday. I wish the Chinese President a happy New Year, and he says it will be tomorrow.”
“Indeed,” Putin replies, “but that’s only minor stuff. Remember when that Polish plane crashed with their president? I called them to express my condolences, but the plane hadn’t even taken off yet!”
submitted by /u/MudakMudakov
My wife says I get mean when I drink whiskey. Now I drink Canadian whiskey. -
I am still mean but I am sorry, too.
submitted by /u/Fuzzie8
A woman, cranky because her husband was late coming home again, decided to leave a note, saying, “I’ve had enough and have left you. Don’t bother coming after me.” -
Then she hid under the bed to see his reaction.
After a short while, the husband came home, and she could hear him in the kitchen before he came into the bedroom.
She could see him walk towards the dresser and pick up the note.
After a few minutes, he wrote something on it before picking up the phone and calling someone.
“She’s finally gone… yeah, I know, about bloody time. I’m coming to see you; put on that sexy French nightie.
I love you… can’t wait to see you… we’ll do all the naughty things you like.”
He hung up, grabbed his keys and left.
She heard the car drive off as she came out from under the bed.
Seething with rage and with tears in her eyes she grabbed the note to see what he wrote…
“I can see your feet. We’re outta bread: be back in five minutes.”
submitted by /u/GenesisWorlds
A newly married couple -
A newly married couple make their way to bed and everything is going well until…
“Ooh! Oh! Look at that! What’s wrong with it?” cries the bride.
“It’s just my junk!” says the groom, offended.
“Yes, but what’s wrong with it? They’re not supposed to look like that! It’s all twisted!”
“That’s what they look like!” he replies.
“Have you ever SEEN another man’s junk?” the bride demands.
“Well, no - but I’m normal! This is what they look like!”
“No they don’t!” she tells him. “It’s… warped! There are grooves all around it! It’s supposed to be… smooth and straight!”
Long story short, the argument isn’t settled and the marriage remains unconsummated.
Next day the newlyweds are walking around town, and the groom announces he needs the bathroom.
“There’s a public gents over there,” says the bride. “And while you’re in there, take a look at the man next to you… see if his is the same as yours.”
The groom goes in, and returns in a few minutes looking very crestfallen.
“You’re right!” he says. “They are supposed to be smooth and straight. And I found out what I’ve been doing wrong.”
“What?”
“Well they all shake theirs dry. I’ve been wringing mine out.”
submitted by /u/comcphee
A thief stuck a pistol in a man’s ribs and said, “Give me your money.” The gentleman, shocked by the sudden attack, said: “You cannot do this, I’m a United States congressman!” -
The thief said, “In that case, give me my money!”
submitted by /u/Jackrwood