For artificial intelligence, is the long term in the near future?








NBC anchor Lester Holt interviews two of Silicon Valley’s biggest CEOs during the Aspen Ideas Festival on Wednesday. Sam Altman of OpenAI and Brian Chesky of Airbnb have been friends since they met in 2008.


Artificial intelligence is not a new technology, as Vivian Schiller, the executive director of the Aspen Institute’s Aspen Digital, emphasized several times during the Aspen Ideas Festival, which ended on Saturday.

Artificial intelligence models have been around for decades, but recently they have captured the public imagination in a new way as one model, OpenAI’s ChatGPT, made the technology accessible and usable for the public. The result has been a “hype cycle,” as Airbnb co-founder and CEO and Ideas guest Brian Chesky called it, that has gotten more attention than the advent of the Internet.

Chesky said the hype cycle may be overblown, noting that AI isn’t even a core component of most phone apps. Likewise, Schiller often cited “Amara’s law,” which states that people tend to overestimate the short-term effects of a new technology and underestimate its long-term effects.

But with the breakneck pace of AI development and the acceleration of technology innovation in general, experts at Ideas said the long-term effects may not be so far-fetched.

During a panel discussion on Monday, Schiller asked University of Manchester professor and historian David Olusoga how quickly new technologies typically lead to large-scale disruption in society. Olusoga agreed that technologies can take a long time to reach the public and change the world—for example, James Watt’s steam engine was invented in the 1760s but didn’t change the world until the 1830s. Now, however, Olusoga said that new technologies tend to be adopted more quickly.

“We can see the gap between innovation and disruption narrowing in the 20th century,” Olusoga said, arguing that the adoption of electricity and the Internet moved faster than that of the steam engine.

Despite his misgivings about the hype, Chesky noted in his panels that 21st-century Internet platforms have rapidly moved from innovation to widespread disruption, changing the way Silicon Valley operates. Chesky argued that attitudes about the technology revolutions of the 2000s have already shifted from starry-eyed naivety to wariness.

Changes in attitude

When they first met at the Silicon Valley startup accelerator Y Combinator in 2008, Chesky said, he and OpenAI co-founder and CEO Sam Altman were part of a fast-paced, move-first, think-later culture that was largely naive about the negative impacts large technology companies can have.

“When I came to Silicon Valley, the word ‘tech’ might as well have been a dictionary definition of ‘good,’” Chesky said. “Facebook was a way to share photos with your friends, YouTube was cat videos, Twitter was telling people what you did today. I think there was a general innocence.”

Now, Chesky said, the culture has changed. In the years since the two tech titans’ time at Y Combinator, the world has seen social media facilitate government overthrows in the Middle East and election meddling in the United States. US politicians regularly talk about the mental health effects of social media on today’s children, and governments have passed sweeping regulations on big tech firms.

“I think over time we’ve realized that when you put a tool in the hands of hundreds of millions of people, they’re going to use it in ways that you didn’t think of,” Chesky said.

Tech stalwart Kara Swisher agreed on her panel that attitudes in Silicon Valley seem to be changing. Swisher said she has enjoyed meeting young tech entrepreneurs in recent years, who often tend to have “a better idea of the risk of the world we live in.”

These attitudes have translated into nervousness and controversy surrounding the advent of large publicly accessible language models.

Altman, who spoke on “Afternoon Talk” on Wednesday, was fired from OpenAI in November because then-board members were concerned about how fast their AI was advancing. Former board members have since said Altman lied to them several times about the company’s security processes. Altman later returned to the company, which now has a new board.

He described the ordeal as “super painful” as he addressed the Ideas audience on Wednesday, but said he understood the former board members, describing them as “nervous about the continued development of AI.” Altman disagreed, however, that the technology was developing too quickly.

“Even though I don’t really agree with what they think, what they’ve said since then and how they’ve acted, I think they’re generally good people who are nervous about the future,” Altman said.

“A lot of trust to earn”

Whether “too” fast or not, the experts at Ideas certainly agreed that technology is moving fast. Government officials and private sector players asserted that technology is moving faster than governments can regulate it.

“Politics just doesn’t move at the same pace as technology,” said Karen McCluskie, deputy director of technology at the UK’s Department of Business and Trade. “If technology is about moving fast and breaking things, then diplomacy is about moving slowly and getting things right. These are opposite ideas. But that will have to change.”

Technology is moving so fast, some experts said, that many technologists are worried they’ll run out of data to train AI models (Altman suspects that will be a big problem). The dilemma is serious enough that some experts have proposed using “synthetic data” to train models. And while the computing power and electricity needed to run the models make them prohibitively expensive, experts say those costs are likely to decrease in the near future, potentially making development faster and more competitive.

Technology leaders say they are facing unprecedented speed with unprecedented caution. Rather than fight to speed up a sluggish acceptance of their new technology, executives at Ideas said they are deliberately delaying product releases while they conduct security checks. Altman said OpenAI has sometimes not released products or taken “long periods of time” to evaluate them before releasing them.

“What will our lives be like when the computer doesn’t just understand us and recognize us and help us do these things, but we can tell it to discover physics or create a great company?” Altman said. “That’s a lot of trust we have to earn as custodians of this technology. And we’re proud of our track record. If you look at the systems we’ve put in place and the time and care we’ve taken to bring them to a generally accepted level of robustness and safety, it’s way beyond what people thought we’d be able to do.”

Chesky compared the acceleration in technology to driving.

“If you imagine you’re in a car, the faster the car goes, the more you have to look ahead and anticipate turns,” he said.

Government officials at Ideas said some of those turns are already whipping past the window. In a session on the role of AI in elections, Schiller pointed to several examples of attempted voter fraud or election interference using AI-generated false information and media. So-called “bad actors” have used AI to deceive voters in Slovakia, Bangladesh and New Hampshire.

Ginny Badanes, general manager of Microsoft’s Democracy Forward program, said the Russian government has also used AI to produce a fake documentary mocking the International Olympic Committee and the upcoming Paris Olympics, from which Russia has been banned. The video uses a simulated Tom Cruise voice as its narrator.

NBC anchor Lester Holt, who interviewed Chesky and Altman, used a different vehicle metaphor than Chesky, saying, “Most of us are just passengers on this bus, watching you guys do these incredible things, and we compare it to the Manhattan Project and ask ourselves, ‘Where is this going?’”








Michigan Secretary of State Jocelyn Benson discusses the role of artificial intelligence in elections at the Aspen Ideas Festival on Friday. Michigan has begun a campaign to educate voters about the possibility of bad actors using fake videos and images to influence elections.




Some successes

Despite its rapid development, experts say AI is still far from the revolution it promises to be.

While the successes have been groundbreaking—one company, New York-based EvolutionaryScale, can now use AI to generate specialized proteins for personalized cancer care—AI still does not play a critical role in most of our lives. For a technology that has been compared to the Internet and even to fire, experts say we’re only seeing the beginning of its potential impacts.

“If you look at your phone and look at the home screen and ask what apps are fundamentally different because of generative AI, I would say essentially none. Maybe the algorithms are a little different,” Chesky said.

But while AI may not have changed the world yet, executives said it certainly has changed the world for some individuals.

“One of the most fun parts of the job is getting an email every day from people who are using these tools in amazing ways,” Altman said. “People say, like, ‘I had this health problem for years that I couldn’t figure out, and it was making my life miserable. I just typed my symptoms into ChatGPT, got this idea, went to see a doctor, and now I’m completely cured.’”

Holt asked Altman where he would like to be in the next five years.

“Further down the same road,” he replied.

Image Source : www.aspendailynews.com
