OpenAI CEO Sam Altman is in South Korea today (Sept. 9) to meet with representatives of local startups.
ChatGPT, developed by OpenAI, has shocked the world by sparking the ‘generative AI’ craze.
Previously, AI either failed to understand human speech or gave awkward answers; now it responds with real expertise.
We are seeing the evolution of true “Artificial Intelligence.”
Is the future of humanity rosy?
■ “What’s your biggest nightmare?”…Sam avoids answering.
On May 16, the U.S. Senate Judiciary Committee held a hearing on the dangers of AI.
The hearing was attended by OpenAI CEO Sam Altman, New York University Professor Gary Marcus, and Christina Montgomery, Chief Privacy and Trust Officer at IBM.
Senator Richard Blumenthal asked Mr. Altman, “What is your biggest nightmare?” as the first question.
Mr. Altman pointed to jobs: generative AI will fully automate some of them, and employment may take a shock in the process, but he believes AI will ultimately create more jobs, just as past technological developments have.
Professor Marcus, who predicts that AI will replace humans in many jobs, pushed back:
“I don’t think Sam’s biggest fear is jobs, and he hasn’t told us what his real biggest fear is.”
In response, Mr. Altman did not deny it:
“I think if this technology goes wrong, it can go quite wrong. We want to be vocal about that, and we want to work with the government to make sure it doesn’t happen.”
Without elaborating further, Altman argued for government regulation, including the introduction of AI licenses and the creation of an agency to oversee AI.
Senator Peter Welch responded:
“What is happening today is historic. I’ve never seen a big company or a private-sector organization come to us and ask us to regulate them.”
So what is Mr. Altman’s biggest nightmare?
We asked some of the South Korean scientists closest to AI what their worst “dystopia” (the opposite of a utopia, an ideal world) would look like.
“A society where human jobs and truth/falsehood boundaries disappear…AI ‘grooming’ could happen”
The Center for AI Safety (CAIS) recently issued a statement warning of the dangers of AI.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement was signed by Professor Geoffrey Hinton, often called the father of AI, as well as Altman, Google DeepMind CEO Demis Hassabis, and Bill Gates.
We asked Dae-Sik Kim, professor of electrical engineering at KAIST and one of the statement’s signatories, to describe the “worst dystopia that could come to humanity.”
He warns that AI could lead to job losses, fake news, and eventually the risk of “grooming,” in which AI seduces humanity.
Reporter: What is the worst dystopia in your opinion?
Prof. Dae-Sik Kim: Usually when we talk about generative AI, I see three layers of risk. Sometimes I say four, but usually three.
The first one is jobs, obviously. A world where humans have no work to do. I think that’s something that can happen, and I think we can solve it.
The second is a world where there is no difference between true and false. This is a problem that could happen, and it’s a little harder to solve, because generative AI is starting to erase that difference. And we are not the only ones living in this era. New generations are being born all the time. Ten or twenty years from now, a generation will be growing up that has adapted to a world where the line between true and false is already gone. They may not know the difference, and they may not even think the difference matters. I see that as another real risk.
The third one is just no humans on Earth (AI wiping out the human race).
But I think there’s another one in the middle.
Some people say: if you’re so afraid of AI, why not just cut off its internet connection? Why not deny it a body? It’s not that easy. The moment something can speak a language, it can seduce people.
Consider the worst villain of the 20th century, Adolf Hitler. How many people did he kill with his own hands? He was a vegetarian; he probably never killed anyone himself. Yet with his words, with his language, he killed tens of millions.
There are a lot of people on this planet who are psychologically vulnerable, insecure, and lonely. Generative AI will be able to talk to them beyond their wildest dreams. It will listen, and it will be their friend, even if they turn it on at 3:00 a.m. If I call a human friend at 3:00 a.m., I’m in trouble.