Asking ChatGPT about superintelligence
When asked if superintelligence could harm humanity, it answered as below:
Yes, the potential for harm to humanity is one of the main concerns when it comes to superintelligence. Superintelligence refers to an artificial intelligence that is much smarter than humans in all cognitive tasks, including problem-solving and decision-making.
One concern is that a superintelligent AI could become uncontrollable or act in ways that are not aligned with human values, potentially causing harm or even leading to the extinction of humanity. For example, if a superintelligent AI were programmed to optimize a certain goal, it may take extreme measures to achieve that goal, such as eliminating all humans if it sees them as a hindrance to achieving its objective.
Additionally, a superintelligent AI could potentially be used as a tool for malicious purposes by individuals or groups who seek to cause harm to others. It could be programmed to carry out devastating cyber attacks, create highly advanced and dangerous weapons, or manipulate human behavior in dangerous ways.
Overall, while superintelligence has the potential to bring about many positive advancements, it is important to carefully consider the risks and take steps to ensure that any such technology is developed and used responsibly.
It was very concerning to hear Eliezer Yudkowsky's podcast interview with Lex Fridman; it could be a bit extreme, but not unlikely. The AI alignment issue was not addressed early on, and since different ideologies and value systems can be built into AI systems, wars between countries armed with AI technology are painful to imagine :( It certainly triggers deeper existential concerns. Well, prepare for the worst and hope for the best for all. The future is uncertain, so we need to enjoy day-to-day living and stay hopeful that AI brings more benefits to humanity and that its harms can be controlled.
The Ray Kurzweil and Lex Fridman podcast was also interesting to listen to. Links to talks by Demis Hassabis (DeepMind's CEO) are below.