Design That Matters
I greatly appreciated Lyel's lecture today, as well as his insights and passion for value-sensitive design. One topic that particularly interested me was how to navigate ethical situations and advocate for responsible technological development. Specifically, the domain of AI safety has come into focus among tech leaders, scientists, and consumers since the release of ChatGPT. The issue of AI governance is complex and lacks international consensus, making it a dynamic and somewhat nebulous challenge. Today, President Biden signed an executive order taking a stance on this topic, requiring developers to share safety test results with the government. This step toward standardized AI safety procedures addresses some consumers' fears about the risks of AI systems used in big tech. Prior to this order, individual contributors had openly voiced their apprehensions. In some highly publicized cases, tech leaders resigned from long-standing positions within their organizations. Notably, Dr. Geoffrey Hinton, nicknamed the godfather of AI, left Google, remarking that he no longer believed Google was acting as a "proper steward" of AI and large language models. Other pioneers such as Meredith Whittaker and Chris Wiggins share similar stories. As a new grad, one might not have the power or reputation to make a bold statement by quitting in the name of ethical advocacy. Making positive change might instead mean adopting a VSD framework, thinking through unintended consequences, or examining possible zones of risk.
Lyel brought up the point that technology does not exist in a vacuum: it exists within the mission, values, culture, and incentive structures of the organization that builds it. One of the most mainstream criticisms of technology targets Facebook's and Instagram's algorithms, which serve the dual purpose of fostering social interaction and generating revenue. Meta is currently being sued by 33 states for allegedly designing addictive content intended to attract children and teenagers. This case underscores that a technology's influence can extend beyond its originally intended use, and it highlights the importance of advocating for technology that aligns with shared values and ethical principles. As Google's former motto put it: "Don't be evil."