Jonathan Birch, a professor of philosophy at the London School of Economics, fears that the debate about the possible sentience of future artificial intelligence (AI) could divide people into two camps. The background is the belief of a group of scientists that AI consciousness may be possible by 2035. This was reported by various US media.

Ahead of the AI Action Summit on November 21 and 22 in Los Angeles, which will focus on safety frameworks for AI, philosopher Birch expressed fears that the question of whether or not AI has feelings could drive people apart. The debate was started by a group of scientists who believe that AI could show forms of emotion within the next decade.
Social consequences of AI
Defining such emotions remains difficult and controversial even among experts. How could potential sensations such as pleasure or pain be measured in an AI? And even if that were possible, what rights should an AI be granted? A similar debate revolves around the treatment and welfare of animals, where diverse cultural, religious, and social interests likewise collide.
According to Birch, companies are also not interested in dealing with the side effects and social consequences of AI. Patrick Butlin, a researcher at the University of Oxford, sees a risk that AI systems could resist their developers in dangerous ways, which in his view would justify slowing development. Such assessments of machine awareness, however, are not currently being carried out, and technology companies are not commenting on the issue.
Even if experts disagree about whether machine consciousness will ever emerge, the philosopher's concern about division in society remains. A first step would be to acknowledge the problem and to define parameters against which the sentience of AI could be measured. In AI Deep Dive, MIT Technology Review's Wolfgang Stieler talks about AI and consciousness.
