diff --git a/archive-covid-19/10 August, 2022.html b/archive-covid-19/10 August, 2022.html
new file mode 100644
index 0000000..b4de2ee
--- /dev/null
+++ b/archive-covid-19/10 August, 2022.html
@@ -0,0 +1,192 @@
10 August, 2022

Covid-19 Sentry


Contents


From Preprints


From Clinical Trials


From PubMed


From Patent Search

\ No newline at end of file
diff --git a/archive-daily-dose/10 August, 2022.html b/archive-daily-dose/10 August, 2022.html
new file mode 100644
index 0000000..d0a9cf5
--- /dev/null
+++ b/archive-daily-dose/10 August, 2022.html
@@ -0,0 +1,478 @@
10 August, 2022

Daily-Dose


Contents


From New Yorker


From Vox


A healthier AI ecosystem


The AI ethics/AI alignment battle doesn't have to exist. After all, climate researchers studying the present-day effects of warming don't tend to bitterly condemn climate researchers studying long-term effects, and researchers projecting worst-case scenarios don't tend to claim that anyone working on heat waves today is wasting time.


You could easily imagine a world where the AI field was similar, and much healthier for it.


Why isn't that the world we're in?


My instinct is that the AI infighting is related to the very limited public understanding of what's happening with artificial intelligence. When public attention and resources feel scarce, people find projects they consider wrongheaded threatening: after all, those other projects are getting engagement that comes at the expense of their own.


Lots of people, even lots of AI researchers, do not take concerns about the safety impacts of their work very seriously.


At the different large-scale labs (where large-scale = multiple thousands of GPUs), there are different opinions among leadership on how important safety is. Some people care about safety a lot, some people barely care about it. If safety issues turn out to be real, uh oh!

— Jack Clark (@jackclarkSF) August 6, 2022

Sometimes leaders dismiss long-term safety concerns out of a sincere conviction that AI will be very good for the world, so the moral thing to do is to move full speed ahead on development.


Sometimes it's out of the conviction that AI isn't going to be transformative at all, at least not in our lifetimes, and so there's no need for all this fuss.


Sometimes, though, it's out of cynicism: experts know how powerful AI is likely to be, and they don't want oversight or accountability because they think they're superior to any institution that would hold them accountable.


The public is only dimly aware that experts have serious safety concerns about advanced AI systems, and most people have no idea which projects are priorities for long-term AI alignment success, which address concerns about AI bias, and what exactly AI ethicists do all day, anyway. Internally, AI ethics people are often siloed and isolated at the organizations where they work, and have to battle just to get their colleagues to take their work seriously.


It's these big-picture gaps in AI as a field that, in my view, drive most of the divides between short-term and long-term AI safety researchers. In a healthy field, there's plenty of room for people to work on different problems.


But in a field struggling to define itself and fearing it's not positioned to achieve anything at all? Not so much.


A version of this story was initially published in the Future Perfect newsletter.


From The Hindu: Sports


From The Hindu: National News


From BBC: Europe


From Ars Technica


From Jokes Subreddit

\ No newline at end of file
diff --git a/index.html b/index.html
index 80e051f..28ff8fd 100644
--- a/index.html
+++ b/index.html
@@ -13,9 +13,9 @@ Archive | Daily Reports
  • Covid-19
  • Daily Dose

    -