(First Things) – With the release of Blade Runner 2049, the long-awaited sequel to the 1982 classic, philosophers and bioethicists are buzzing about when non-human beings should be granted “rights.” Lorraine Boissoneault offers an interesting take in Smithsonian, exploring whether the fictional “replicants” that inhabit the dystopian world of both Blade Runner films, as well as AI computers that may soon be designed, should be considered “persons” entitled to legal rights. From “Are Blade Runner’s Replicants ‘Human’?”:
Blade Runner is only a movie and humans still haven’t managed to create replicants. But we’ve made plenty of advances in artificial intelligence, from self-driving cars learning to adapt to human error to neural networks that argue with each other to get smarter. That’s why, for [Yale philosopher Susan] Schneider, the questions posed by the film about the nature of humanity and how we might treat androids have important real-world implications.
“One of the things I’ve been doing is thinking about whether it will ever feel like anything to be an AI. Will there ever be a Rachael [the most advanced replicant]?” says Schneider, who uses Blade Runner in her class on philosophy in science fiction. This year, Schneider published a paper on the test she developed with astrophysicist Edwin Turner to discover whether a mechanical being might actually be conscious.
The test, called the AI Consciousness Test, determines whether a machine has a sense of “self” and exhibits “behavioral indicators of consciousness.”