
Inside the AI Party at the End of the World


In a $30 million mansion perched on a cliff overlooking the Golden Gate Bridge, a group of researchers, philosophers, and AI technologists gathered to discuss the end of humanity.

The Sunday afternoon symposium, called "Worthy Successor," revolved around a provocative idea from entrepreneur Daniel Faggella: that the "moral goal" of advanced AI should be to create a form of intelligence so powerful and wise that "you would gladly prefer that it (not humanity) determine the future path of life itself."

Faggella made the theme clear in his invitation. "This event is very much focused on posthuman transition," he wrote to me via X DMs. "Not on AGI that forever serves as a tool for humanity."

A party filled with futurist fantasies, where attendees discuss the end of humanity as a logistics problem rather than a metaphorical one, might sound niche. But if you live in San Francisco and work in AI, this is a typical Sunday.

Around 100 people sipped nonalcoholic cocktails and nibbled on cheese plates near floor-to-ceiling windows facing the Pacific Ocean before gathering to hear three talks on the future of intelligence. One attendee sported a shirt that said "Kurzweil was right," seemingly a reference to Ray Kurzweil, the futurist who predicted machines will surpass human intelligence in the coming years. Another wore a shirt that said "does this help us get to safe AGI?" accompanied by a thinking-face emoji.

Faggella told WIRED that he threw this event because "the big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it" and pointed to earlier comments from tech leaders like Elon Musk, Sam Altman, and Demis Hassabis, who "were all pretty frank about the possibility of AGI killing us all." Now that the incentives are to compete, he says, "they're all racing full bore to build it." (To be fair, Musk still talks about the risks associated with advanced AI, though this hasn't stopped him from racing ahead with his own efforts.)

On LinkedIn, Faggella touted a star-studded guest list, with AI founders, researchers from all the top Western AI labs, and "most of the important philosophical thinkers on AGI."

The first speaker, Ginevera Davis, a writer based in New York, warned that human values might be impossible to translate into AI. Machines may never understand what it's like to be conscious, she said, and trying to hard-code human preferences into future systems may be shortsighted. Instead, she proposed a lofty idea called "cosmic alignment": building AI that can seek out deeper, more universal values we haven't yet discovered. Her slides often showed a seemingly AI-generated image of a techno-utopia, with a group of humans gathered on a grassy knoll overlooking a futuristic city in the distance.

Critics of machine consciousness will say that large language models are simply stochastic parrots, a metaphor coined by a group of researchers, some of whom worked at Google, who wrote in a famous paper that LLMs do not actually understand language and are only probabilistic machines. But that debate wasn't part of the symposium, where speakers took as a given the idea that superintelligence is coming, and soon.



