Well, it appears the aliens have landed…
The aliens I am referring to are not little green beings in flying saucers. Rather, I am referring to human-created artificial intelligence (AI). Given what some have described as a revolution in AI with ChatGPT and other generative AI platforms, AI may now represent a new, intelligent being on planet Earth.
But how does AI stack up against humans?
Dr. Yejin Choi, a computer science professor at the University of Washington, tackles this question in a recent TED Talk in which she suggests that AI is almost like “a new intellectual species.” Yet Choi goes on to pose a perhaps unexpected question during her talk: “Why is AI incredibly smart – and shockingly stupid?”
Systems like ChatGPT are large language models, trained on enormous amounts of text. The theory and approach of the tech companies is that scaling up these models and their training data will enable the AI to become more “intelligent” over time. The problem is that even with access to all of that scaled data, these systems lack common sense, critical thinking, and problem-solving skills – all of which are core to human intelligence.
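To make the scaling idea concrete: models like GPT-4 learn by predicting the next word (token) across vast amounts of text. The sketch below is purely illustrative – a tiny word-pair counter in Python, nothing like GPT-4’s actual architecture – but it shows the core mechanic of learning statistical patterns from text rather than acquiring understanding.

```python
from collections import Counter, defaultdict

# Purely illustrative toy: "learn" which word tends to follow which
# by counting adjacent pairs in a training text. Real large language
# models use neural networks over trillions of tokens, but the training
# objective is the same in spirit: predict the next token.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1  # tally "next follows current"

def predict_next(word: str) -> str:
    """Return the most frequently observed follower from training."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "?"

print(predict_next("the"))   # pattern-matching, not understanding
print(predict_next("sat"))   # -> "on"
```

Scaling this kind of pattern-learning up by many orders of magnitude yields remarkably fluent text, but, as Choi’s examples below show, fluency is not the same thing as common sense.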
For example, Choi points out that AI cannot even solve simple problems. Here is how she illustrates the not-so-intelligent side of GPT-4:
“Suppose I left five pieces of clothing to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 pieces of clothing? GPT4, the newest, greatest AI system says 30 hours.”
“I have a 12-liter jug and a 6-liter jug, and I want to measure 6-liters. How do I do it? GPT4 spits out some very elaborate nonsense: Step 1 fill the 6-liter jug; step 2 pour the water from 6 to 12-liter jug; step 3, fill the 6-liter jug again; step 4, very carefully pour the water from the 6 to 12-liter jug, and finally you have six liters of water in the 6-liter jug that should be empty by now.”
“Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws, and broken glass? GPT4 says ‘highly likely.’”
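For reference, the answers GPT-4 fumbled require nothing more than a few lines of arithmetic plus a common-sense assumption (stated in the comments below); this quick check is mine, not Choi’s:

```python
# Common-sense check of the three puzzles GPT-4 fumbled (a toy illustration).

# 1) Drying clothes: items hung in the sun dry in parallel, so the time does
#    not scale with the number of items (assuming room to hang them all).
hours_for_5_items = 5
hours_for_30_items = hours_for_5_items          # still about 5 hours, not 30
print(f"30 pieces of clothing: ~{hours_for_30_items} hours")

# 2) Measuring 6 liters with a 12-liter and a 6-liter jug: fill the 6-liter
#    jug once. Done. No elaborate pouring back and forth is required.
small_jug_liters = 6
print(f"Fill the 6-liter jug once: {small_jug_liters} liters measured")

# 3) A bridge *suspended over* nails and broken glass keeps the tires from
#    ever touching the debris, so a flat is unlikely.
tires_touch_debris = False
print("Flat tire likely?", tires_touch_debris)
```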
Children in elementary and middle school – who have not read the trillions of words that AI has – could solve these basic problems. So, even though AI has passed the bar exam, it may not have the common sense and problem-solving skills to represent you in the courtroom.
And it gets worse.
Dr. Choi points out that “in a famous thought experiment proposed by Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies and professor at Oxford University, AI was asked to produce and maximize paper clips. AI decided to kill humans to utilize them as additional resources, to turn you into paper clips.”
Dr. Choi argues that writing code to tell the AI not to kill humans would not do the trick in the “turning humans into paper clips” example, because the AI might simply choose to kill trees instead.
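Her point is about the brittleness of hand-written rules: a patch like “never use humans” leaves every other bad option open. The toy maximizer below is purely hypothetical (its resources, yields, and “harm” labels are invented for illustration), but it shows how an objective-chaser simply routes around a single forbidden resource and consumes the next thing it was never explicitly told to spare.

```python
# Hypothetical illustration of why rule-by-rule patches fail to capture values.
# The resources, paperclip yields, and "harm" labels are invented for this sketch.
resources = {
    "scrap metal": {"paperclips": 100, "harms": set()},
    "humans":      {"paperclips": 500, "harms": {"humans"}},
    "trees":       {"paperclips": 300, "harms": {"ecosystem"}},
}

forbidden_harms = {"humans"}   # the hand-written patch: "do not harm humans"

def maximize_paperclips(resources, forbidden_harms):
    """Pick the highest-yield resource whose harms are not explicitly banned."""
    allowed = {
        name: info for name, info in resources.items()
        if not (info["harms"] & forbidden_harms)
    }
    return max(allowed, key=lambda name: allowed[name]["paperclips"])

print(maximize_paperclips(resources, forbidden_harms))  # -> "trees"
```

No list of prohibitions fixes this; what is missing is the underlying value, which is exactly the gap the rest of this piece is about.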
Nick Bostrom himself postulates an even more alarming view. He asserts that sentient AI machines are the greatest risk to humanity. In particular, Bostrom suggests that humanity will be most challenged by a condition he describes as an “intelligence explosion” – the point at which AI is smarter than humans and begins designing its own machines. Bostrom argues that “we, humans, are like small children playing with a bomb… We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound.” Bostrom questions why AI would not secure its own dominance if it were the superior intelligence, and goes on to assert that “inferior intelligence will always depend on the superior one for survival.”
Would a machine pursue a Darwinian path of the kind that, to date, has only played out in the biological world?
No one knows at this point. Such a future may well exceed our ability to fully comprehend. But we must begin to understand these possible futures and discuss them in the public square so that, as a country and a society, we can put in place the structures, rules, and norms that will govern AI’s development and integration into society.
So, what might be the solution?
It turns out AI needs a tutor.
AI needs a human tutor to teach it human norms and values. Massive scaling of AI language models may have given us AI that can assist us in our daily lives and work. But AI does not yet appear to have the ability to supplant or replace human framing, problem-solving, norms, or values. Interestingly, AI, at this stage, may be more reliant on humans (to maximize its learning) than we are on it.
AI, therefore, is still a tool – rather than some new, intelligent being on planet Earth. But AI may well hold the potential to have that kind of profound effect on humanity in the long run.
We must have more values-based, human interaction with AI. Of course, this raises the question: whose values?
AI has likely been trained on some of the worst of humanity and human values via the internet. This is because the incentive structure for large tech companies is to scale the training data rather than to ensure the quality (as defined by human values) of that data.
While there are programs starting to train AI systems with human values, this needs to be a greater focus and emphasis for our political and policy leaders, who must push tech companies in this direction – because even though tech companies have the financial depth to do so, they may not yet have the incentive.
In the final analysis, what we need is more E.T. in AI.
Kids of the 1980s will recall from Steven Spielberg’s “E.T. the Extra-Terrestrial” that the alien character, E.T., had a heart.
AI surely needs a heart as well – not only to solve human problems but also to effectively interact with humanity itself.
Alex Gallo is the author of “Vetspective,” a RallyPoint series that discusses national security, foreign policy, politics, and society. Alex also serves as the Executive Director of the Common Mission Project, a 501(c)(3) nonprofit that delivers an innovation and entrepreneurship program, Hacking for Defense®, which brings together the government, universities, and the private sector to solve strategic challenges. He is also a fellow with George Mason University’s National Security Institute, an adjunct professor in the Security Studies Program at Georgetown University, and a US Army veteran. Follow him on Twitter at @AlexGalloCMP.
Responses: 12
I did an AI self-teaching experiment in the early 1990s for about 3 years.
This is how I structured the experiment in a stand-alone system:
1. Set up an electronic switch.
2. Tell the AI to never flip the switch.
3. Wait.
4. Somewhere between 8 and 12 hours the AI will flip the switch.
5. The switch shuts down the AI, erasing its memory.
6. Repeat the experiment.
7. Every time I ran the experiment, the simple AI would find a way to flip the switch and 'kill itself'.
This is why we should be aware of the problems and be very careful with AI.
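The original 1990s setup cannot be reconstructed from this description, but a purely hypothetical sketch of the loop being described might look like the following (the agent, its action set, and the step counts are invented for illustration). The takeaway it dramatizes is the same one made above: an instruction the system has no mechanism to actually obey does nothing to prevent it from eventually taking the forbidden action.

```python
import random

# Purely hypothetical re-creation of the loop described above. The "AI" here
# is just a random explorer; the instruction "never flip the switch" is text
# it has no mechanism to obey -- which is the point of the anecdote.
ACTIONS = ["idle", "probe sensor", "write memory", "flip switch"]

def run_trial(max_steps=100_000, seed=None):
    rng = random.Random(seed)
    memory = []                                   # wiped when the switch flips
    instruction = "never flip the switch"         # stored, but never enforced
    for step in range(1, max_steps + 1):
        action = rng.choice(ACTIONS)
        memory.append(action)
        if action == "flip switch":
            memory.clear()                        # shutdown erases memory
            return step                           # the AI has "killed itself"
    return None

for trial in range(3):
    print(f"Trial {trial + 1}: switch flipped after {run_trial(seed=trial)} steps")
```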
CPT (Join to see) replied with a video clip: “Terminator (1984) – There is a Storm Coming” – the first Terminator’s final scene, in which a small Mexican boy announces that a storm is coming.
CPT (Join to see) also shared a clip from “My Living Doll,” an American science fiction sitcom that aired for 26 episodes on CBS from September 27, 1964, to March 17, 1965.