    A Responsible Systems Approach to Generative AI

    Written by:

    Dave Graham
    Chair, Responsible System Working Group


    Generative Artificial Intelligence (AI) has become a technological tour de force in recent years, allowing the use of natural language to "create" and lending credence to the idea of an emerging Artificial General Intelligence (AGI). Image generators like OpenAI's DALL·E 2, Stable Diffusion, and Midjourney have shown the potential of a human-machine partnership to drive artistic expression, while also highlighting the dangers of unfettered access to content assumed to be protected by privacy regulations or copyrights.

    Within the past several months, OpenAI's ChatGPT has emerged with resounding popularity, highlighting the opportunity that Large Language Models (LLMs), as part of Generative AI, provide to organizations. As a stepping stone towards AGI, it serves both as a sandbox for development and a milestone for what is possible.

    However, ChatGPT isn't perfect and should be approached with a reasonable measure of caution as organizations evaluate its usefulness. Looking through a Responsible Systems lens, there are several areas where further evaluation and refinement are needed:

    1. Examine the underlying LLMs used for training and inference. As language models develop further, having a clear view of the provenance of their training data sets will limit potential harm toward marginalized populations while increasing the trustworthiness of the responses that ChatGPT-like systems provide. Similarly, an open and auditable model for data inclusion, one that enables trust and transparency regarding the underlying data, is a step in the right direction for ensuring responsible data usage (another core theme of the Responsible Computing consortium).
    2. Clarify Use Cases. ChatGPT isn't replacing Robotic Process Automation (RPA) or responsive chatbots. A specific set of systems and processes (even parameters) requires more tightly constrained responses. For example, ChatGPT is exceptionally unprepared to provide brief or accurate answers to legal questions and, in demonstrated circumstances, has fabricated rules and regulations that don't exist. This complicates responsible usage and heightens the negative impact should the tool be deployed without careful application. In other situations, such as subject matter prompting, ChatGPT is quite helpful, providing initial seed ideas that you can develop into final deliverables.
    3. Understand Adversarial Usage and "Truth." You can manipulate the underpinning mechanisms of conversational artificial intelligence to cause harm, whether social, emotional, environmental, or otherwise. I cannot overstate the impact on marginalized communities when these systems are in the wrong hands and unfettered by reasonable regulation and curation. "Truth" in these systems is an abstract, unregulated concept. As such, any answer delivered should be presumed potentially fictitious so long as the entirety of the toolchain isn't transparent. Additionally, adversarial usage of ChatGPT is still developing, though many efforts are already underway to exploit context and content for indeterminate purposes.
    4. Understand ESG/SDG Impact. You can argue that conversational AI and its underpinning LLMs are taxing on computational resources, thus impacting an organization's environmental, social, and governance (ESG) guidelines and running afoul of stated Sustainable Development Goals (SDGs). Moving the locus of control from on-premises to hosted infrastructure may assuage the localized impact, but this merely shifts the burden to other entities. Ensuring that the entire system of use, from physical infrastructure to the compiled code, is designed to promote sustainable resource usage is a step towards advancing responsible systems and computing worldviews. Additionally, understanding how a conversational AI can negatively impact an organization's members, causing more harm than good, is paramount to ensuring that appropriate use cases and protections are in place.

    As with any emerging technology and its early use cases, our goal is to focus on the potential for good: improving those aspects that most affect the systems in which we operate (social, organizational, etc.). Within the Responsible Systems workgroup, we're focused on understanding what it means to build responsible systems that advance society's technological foundations while simultaneously ensuring humanity's safety and care. We're excited by the opportunities that ChatGPT and other generative AI models bring, and we look forward to working together to bring about positive change.