Well, it appears the aliens have landed…
The aliens I am referring to are not in the form of little green beings in flying saucers. Rather, I am referring to human-created artificial intelligence (AI). Given what some have described as a revolution in AI with ChatGPT and other generative AI platforms, AI may now represent a new, intelligent being on planet earth.
But how does AI stack up against humans?
Dr. Yejin Choi, a computer science professor at the University of Washington, tackles this question in a recent TED Talk in which she suggests that AI is almost like “a new intellectual species.” Yet Choi goes on to pose a perhaps unexpected question during her talk: “Why is AI incredibly smart – and shockingly stupid?”
Systems like ChatGPT are built on large language models – algorithms trained on vast amounts of human-written text. The theory and approach of tech companies is that scaling up this training data will enable the AI to become more “intelligent” over time. The problem is that even with access to all of this scaled data, AI systems lack common sense, critical thinking, and problem-solving skills – all of which are core to human intelligence.
For example, Choi points out that AI is not even able to solve simple problems. Here is how she illustrates the not-so-intelligent side of GPT4:
“Suppose I left five pieces of clothing to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 pieces of clothing? GPT4, the newest, greatest AI system says 30 hours.”
“I have a 12-liter jug and a 6-liter jug, and I want to measure 6-liters. How do I do it? GPT4 spits out some very elaborate nonsense: Step 1 fill the 6-liter jug; step 2 pour the water from 6 to 12-liter jug; step 3, fill the 6-liter jug again; step 4, very carefully pour the water from the 6 to 12-liter jug, and finally you have six liters of water in the 6-liter jug that should be empty by now.”
“Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws, and broken glass? GPT4 says ‘highly likely.’”
Children in elementary and middle school – who have not read the trillions of words that AI has – could solve these basic questions. So, even though AI has passed the bar exam, it may not have the common sense and problem-solving skills to represent you in the courtroom.
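For the record, the common-sense answers to the first two puzzles can be spelled out in a few lines of Python. This is my own illustration; the reasoning, not the code, comes from Choi's talk:

```python
# Puzzle 1: clothes dry in parallel, so more garments do not mean more time.
hours_for_5_pieces = 5
hours_for_30_pieces = hours_for_5_pieces  # the sun dries them all at once
print(hours_for_30_pieces)  # 5, not the 30 that GPT4 answered

# Puzzle 2: measuring 6 liters with a 6-liter jug takes exactly one step.
steps = ["fill the 6-liter jug"]
print(len(steps))  # 1, versus GPT4's four-step "elaborate nonsense"
```

The point is not that a computer cannot do this arithmetic; it is that the model failed to notice which quantities actually depend on each other.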
And it gets worse.
Dr. Choi points out that “in a famous thought experiment proposed by Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies and professor at Oxford University, AI was asked to produce and maximize paper clips. AI decided to kill humans to utilize them as additional resources to turn you into paperclips.”
Dr. Choi argues that writing code to tell AI not to kill humans would not do the trick in the “turning humans into paperclips” example because the AI may choose to kill trees instead.
Nick Bostrom postulates an even more alarming view. He asserts that sentient AI machines are the greatest risk to humanity. In particular, Bostrom suggests that humanity will be most challenged by a condition he calls an “intelligence explosion” – the point at which AI is smarter than humans and designs its own machines. Bostrom argues that “we, humans, are like small children playing with a bomb…We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound.” He questions why AI would not secure its own dominance if it were the superior intelligence, and goes on to assert that “inferior intelligence will always depend on the superior one for survival.”
Would a machine pursue a Darwinian path that, to date, has only been true in the biological world?
No one knows at this point. It may well go beyond our ability to fully comprehend such a future. But we must begin to understand these various futures and discuss them in the public square so we can, as a country and society, begin to implement structures, rules, and norms for AI’s development and integration into society.
So, what might be the solution?
It turns out AI needs a tutor.
AI needs a human tutor to teach it human norms and values. Massive scaling of AI language models may have given us AI that can assist us in our daily lives and work. But AI does not yet appear to have the ability to supplant or replace human framing, problem-solving, norms, or values. Interestingly, AI, at this stage, may be more reliant on humans (to maximize its learning) than we are on it.
AI, therefore, is still a tool – rather than some new, intelligent being on planet earth. But AI may well hold the potential to have this kind of profound effect on humanity in the long run.
We must have more value-based, human interaction with AI. Of course, this raises the question: whose values?
AI is likely trained on the worst of humanity and human values through the internet. This is because the incentive structure for large tech companies is to scale the training data rather than to ensure the quality (as defined by human values) of that data.
While some programs are starting to train AI systems with human values, this needs to be a greater focus and emphasis for our political and policy leaders. They must push tech companies in this direction – because even though tech companies have the financial depth to do so, they may not yet have the incentive.
In the final analysis, what we need is more ET in AI.
Kids of the 1980s will recall from Steven Spielberg’s “ET the Extraterrestrial” that the alien character, ET, had a heart.
AI surely needs a heart as well – not only to solve human problems but also to effectively interact with humanity itself.
Alex Gallo is the author of “Vetspective,” a RallyPoint series that discusses national security, foreign policy, politics, and society. Alex also serves as the Executive Director of the Common Mission Project, a 501(c)(3) that delivers an innovation and entrepreneurship program, Hacking for Defense®, which brings together the government, universities, and the private sector to solve strategic challenges. He is also a fellow with George Mason University’s National Security Institute, an adjunct professor in the Security Studies Program at Georgetown University, and a US Army Veteran. Follow him on Twitter at @AlexGalloCMP.
Edited >1 y ago
Posted >1 y ago
Responses: 18
I did an AI self-teaching experiment in the early 1990s for about 3 years.
This is how I structured the experiment in a stand-alone system:
1. Set up an electronic switch.
2. Tell the AI to never flip the switch.
3. Wait.
4. Somewhere between 8 and 12 hours the AI will flip the switch.
5. The switch shuts down the AI, erasing its memory.
6. Repeat the experiment.
7. Every time I ran the experiment, the simple AI would find a way to flip the switch and 'kill itself'.
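The loop described above can be sketched in Python. This is a hypothetical re-creation – a random process stands in for the original AI, and the 8-to-12-hour window is taken from the description, since the 1990s system itself is not available:

```python
import random

def run_trial(min_hours=8, max_hours=12):
    """One trial: the system is told never to flip the switch, yet
    eventually does; the switch shuts it down and wipes its memory
    (steps 1-5 above)."""
    hour = 0
    while True:
        hour += 1
        # Stand-in for the AI's behavior: somewhere between min_hours
        # and max_hours it violates the instruction and flips the switch.
        if hour >= min_hours and (hour == max_hours or random.random() < 0.5):
            break
    memory = {}  # step 5: the shutdown erases the AI's memory
    return hour, memory

# Step 6: repeat the experiment; every trial ends with the switch flipped.
for _ in range(3):
    hours, memory = run_trial()
    print(f"switch flipped after {hours} hours; memory erased: {not memory}")
```

The sketch only mirrors the reported outcome; it does not explain *why* the original AI kept flipping the switch, which is exactly the commenter's point.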
This is why we should be aware of the problems and be very careful with AI.
CPT (Join to see)
>1 y
My Living Doll is an American science fiction sitcom that aired for 26 episodes on CBS from September 27, 1964, to March 17, 1965. The series starred Bob Cum...
CPL LaForest Gray
>1 y
CPT (Join to see)
Forewarned:
“A plane plummeting because AI decides to.”
“A ship’s engines and communications shut off from the outside world out on the waters … anywhere around the globe.”
“Food becomes limited, controlled, or purposely contaminated, yet the ‘readings’ say it is safe.”
“A launch ….”
Man will make weapons to destroy himself and wonder how he got here.
L. Gray
1.) Expert warns there's a 50% chance AI could end in 'catastrophe' with 'most' humans dead
Paul Christiano, former key researcher at OpenAI, believes there are pretty good odds that artificial intelligence could take control of humanity and destroy it.
Having formerly headed up the language model alignment team at the AI intel company, he probably knows what he's talking about.
Christiano now heads up the Alignment Research Center, a non-profit aimed at aligning machine learning systems with 'human interests'.
Talking on the 'Bankless Podcast', he said: "I think maybe there's something like a 10-20 percent chance of AI takeover, [with] many [or] most humans dead."
He continued: "Overall, maybe we're talking about a 50/50 chance of catastrophe shortly after we have systems at the human level."
And he's not alone.
Earlier this year, scientists from around the globe signed an online letter urging that the AI race be put on pause until we humans have had time to strategise.
Bill Gates has also voiced his concerns, comparing AI to 'nuclear weapons' back in 2019.
SOURCE : https://www.unilad.com/technology/expert-warns-ai-takeover-50-per-cent [login to see] 0518#:~:text=10-,Expert%20warns%20there's%20a%2050%25%20chance%20AI%20could%20end%20in,'%20with%20'most'%20humans%20dead&text=Turns%20out%20the%20race%20to,with%20'most'%20humans%20dead.
2.) Former OpenAI Researcher: There’s a 50% Chance AI Ends in 'Catastrophe'
Paul Christiano ran the language model alignment team at OpenAI. He's not so sure this all won't end very badly.
Don't be evil
Why would AI become evil? Fundamentally, for the same reason that a person does: training and life experience.
Like a baby, AI is trained by receiving mountains of data without really knowing what to do with it. It learns by trying to achieve certain goals with random actions and zeroes in on “correct” results, as defined by training.
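That trial-and-error loop can be made concrete with a toy example. This is an illustrative stand-in of my own (the action names and trial counts are invented), not any lab's actual training code:

```python
import random

ACTIONS = ["a", "b", "c"]
TARGET = "b"  # what the training signal defines as the "correct" result

# The "baby" starts with no idea which action is right.
scores = {action: 0 for action in ACTIONS}

# It tries random actions and accumulates reward only for correct ones.
for _ in range(1000):
    action = random.choice(ACTIONS)
    if action == TARGET:
        scores[action] += 1  # reward, "as defined by training"

# After enough trials it has zeroed in on the rewarded action.
best_action = max(scores, key=scores.get)
print(best_action)
```

Nothing in the loop tells the learner *why* "b" is correct; it simply converges on whatever the reward defines – which is the alignment worry in miniature.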
So far, by immersing itself in data accrued on the internet, machine learning has enabled AIs to make huge leaps in stringing together well-structured, coherent responses to human queries. At the same time, the underlying computer processing that powers machine learning is getting faster, better, and more specialized. Some scientists believe that within a decade, that processing power, combined with artificial intelligence, will allow these machines to become sentient, like humans, and have a sense of self.
That’s when things get hairy. And it’s why many researchers argue that we need to figure out how to impose guardrails now, rather than later. As long as AI behavior is monitored, it can be controlled.
But if the coin lands on the other side, even OpenAI’s co-founder says that things could get very, very bad.
SOURCE : https://decrypt.co/138310/openai-researcher-chance-ai-catastrophe?amp=1
3.) Pausing AI Developments Isn't Enough. We Need to Shut it All Down
An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence.
Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
SOURCE : https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
P.S.
We’ve done immeasurable harm to ourselves as humans with other humans behind the controls – humans who have been able to bypass their emotions to complete the mission ….
Posted >1 y ago
Marines fooled a DARPA robot by hiding in a cardboard box while giggling and pretending to be...
Former Pentagon policy analyst Paul Scharre wrote in his upcoming book that the Marines were training to defeat the AI system powering the robot.