<h1>Silicon Valley Takes AGI Seriously—Washington Should Too</h1>
<p><em>October 18, 2024</em></p>
<div id="article-body-main">
<p><a href="https://time.com/6271657/a-to-z-of-artificial-intelligence/" target="_blank" rel="noopener">Artificial General Intelligence</a>—machines that can learn and perform any cognitive task that a human can—has long been relegated to the realm of science fiction. But <a href="https://www.businessinsider.com/openai-nears-ai-systems-reason-cause-concern-sam-altman-chatgpt-2024-7" target="_blank" rel="noopener">recent developments</a> show that AGI is no longer a distant speculation; it's an impending reality that demands our immediate attention.</p>
<p>On Sept. 17, during a Senate Judiciary Subcommittee <a href="https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives" target="_blank" rel="noopener">hearing</a> titled "Oversight of AI: Insiders' Perspectives," whistleblowers from leading AI companies sounded the alarm on the rapid advance toward AGI and the glaring lack of oversight. <a href="https://time.com/7012863/helen-toner/" target="_blank" rel="noopener">Helen Toner</a>, a former board member of OpenAI and director of strategy at Georgetown University's Center for Security and Emerging Technology, <a href="https://www.judiciary.senate.gov/imo/media/doc/2024-09-17_pm_-_testimony_-_toner.pdf" target="_blank" rel="noopener">testified</a>: "The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence." She added that leading AI companies such as OpenAI, Google, and Anthropic are "treating building AGI as an entirely serious goal."</p>
<p>Toner's co-witness William Saunders—a former researcher at OpenAI who <a href="https://time.com/6985866/openai-whistleblowers-interview-google-deepmind/" target="_blank" rel="noopener">recently resigned</a> after losing faith in the company's ability to act responsibly—echoed her sentiments, <a href="https://www.judiciary.senate.gov/imo/media/doc/2024-09-17_pm_-_testimony_-_saunders.pdf" target="_blank" rel="noopener">testifying</a> that "companies like OpenAI are working towards building artificial general intelligence" and that "they are raising billions of dollars towards this goal."</p>
<p><strong>Read More: </strong><em><a href="https://time.com/6556168/when-ai-outsmart-humans/" target="_blank" rel="noopener">When Might AI Outsmart Us? It Depends Who You Ask</a></em></p>
<p>All three leading AI labs—OpenAI, Anthropic, and Google DeepMind—are more or less explicit about their AGI goals. OpenAI's mission states: "To ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Anthropic focuses on "building reliable, interpretable, and steerable AI systems," aiming for "safe AGI." Google DeepMind aspires "to solve intelligence" and then to use the resultant AI systems "to solve everything else," with co-founder Shane Legg stating unequivocally that he expects "human-level AI will be passed in the mid-2020s." New entrants into the AI race, such as <a href="https://time.com/6294278/elon-musk-xai/" target="_blank" rel="noopener">Elon Musk's xAI</a> and <a href="https://time.com/6990076/safe-superintelligence-inc-announced/" target="_blank" rel="noopener">Ilya Sutskever's Safe Superintelligence Inc.</a>, are similarly focused on AGI.</p>
<p>Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month's hearing might have broken through in a way that previous discourse on AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, observed that the witnesses are "folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don't have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have."</p>
<p>Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. "The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It's very far from science fiction. It's here and now—one to three years has been the latest prediction," he said. He didn't mince words about where responsibility lies: "What we should learn from social media, that experience is, don't trust Big Tech."</p>
<p>The apparent shift in Washington reflects a public that has grown more willing to entertain the possibility of AGI's imminence. In a <a href="https://drive.google.com/file/d/1PkoY2SgKXQ_vFxPoaZK_egv-N150WR7O/view" target="_blank" rel="noopener">July 2023 survey</a> conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed "within the next 5 years." Some 82% of respondents also said we should "go slowly and deliberately" in AI development.</p>
<p>That's because the stakes are astronomical. Saunders detailed that AGI could lead to cyberattacks or the creation of "novel biological weapons," and Toner warned that many leading AI figures believe that in a worst-case scenario AGI "could lead to literal human extinction."</p>
<p>Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?</p>
<p>First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace <a href="https://www.theguardian.com/technology/2024/mar/27/ai-apocalypse-could-take-away-almost-8m-jobs-in-uk-says-report" target="_blank" rel="noopener">millions of jobs</a>, requiring society to adapt. In a bad scenario, AGI could <a href="https://time.com/6258483/uncontrollable-ai-agi-risks/" target="_blank" rel="noopener">become uncontrollable</a>.</p>
<p>Second, we must establish regulatory guardrails for powerful AI systems. Regulation should give the government transparency into <a href="https://time.com/6985504/openai-google-deepmind-employees-letter/" target="_blank" rel="noopener">what's going on with the most powerful AI systems</a> being created by tech companies. Such transparency would reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone expects it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren't a possibility, but the prospect of AGI heightens their importance.</p>
<p><strong>Read More: </strong><em><a href="https://time.com/6848922/ai-regulation/" target="_blank" rel="noopener">What an American Approach to AI Regulation Should Look Like</a></em></p>
<p>In a particularly concerning part of Saunders' testimony, he said that during his time at OpenAI there were long stretches when he or hundreds of other employees could "bypass access controls and steal the company's most advanced AI systems, including GPT-4." This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.</p>
<p>Finally, public engagement is essential. AGI isn't just a technical issue; it's a societal one. The public must be informed about, and involved in, discussions of how AGI could impact all of our lives.</p>
<p>No one knows how long we have until AGI—what Senator Blumenthal referred to as "the 64 billion dollar question"—but the window for action may be rapidly closing. Some AI figures, including Saunders, think it may arrive in <a href="https://time.com/6556168/when-ai-outsmart-humans/" target="_blank" rel="noopener">as little as three years</a>.</p>
<p>Ignoring the potentially imminent challenges of AGI won't make them disappear. It's time for policymakers to get their heads out of the cloud.</p>
</div>
<p><a href="https://time.com/7093792/ai-artificial-general-intelligence-risks/" target="_blank" rel="noopener">Source link</a></p>