Anyway, fascinating.
Embracing the Augmentative AI vision
I haven’t done enough research (yet) to say how good the model is and for what use cases. Many of my IDC colleagues develop thought leadership research and collect in-depth data on how generative AI will affect businesses and consumers.
What I’m thinking about are the societal implications of generative artificial intelligence. This was sparked yesterday during our first meeting with the IDC Government Xchange 2023 Advisory Board. Advisory board member Gwendolyn Carpenter, who has children in school, said she has heard of students using it to cheat on their homework.
My colleague Matt Ledger has already written a quick take on this issue. As Matt mentioned in his article, there is a range of opinions on this: from schools that believe ChatGPT can be very valuable as a learning tool, to those that are unsure of its impact and have temporarily forbidden its use, to educators who worry that generative AI could erode the ability to write, to learn, and ultimately to think. This, paradoxically, could hinder our ability to invent brilliant new tools, such as ChatGPT itself.
I’m not a Luddite. I don’t think we should stop progress. But I do think generative AI is a great example of the ongoing debate about whether we should design AI that replaces human skills or AI that augments human skills.
For example, I don’t want generative AI to replace my writing just because it’s much faster and more elegant than I am at synthesizing the available knowledge. I’m having so much fun voicing my opinions in this blog precisely because I’m creating it as I write it!
But I sure would love to have a tool to critique my writing. A tool that might, for example, highlight where my piece is biased or where I might consider additional sources of data and literature to enrich my perspective. Sort of a much smarter version of the spell checker that tells me if there’s a typo or if I’ve used punctuation incorrectly or if I’ve used too many passive forms. This augmentative AI tool would push my brain to think more, not less. And I could still make my own choices about whether or not to apply the advice.
Policymakers need to think about how they can shape new norms to maximize the benefits and address the risks of AI. They could, for example, recommend (or mandate) a machine-readable tag that helps identify whether a piece of content was generated by AI, perhaps backed by government-regulated certifications. But regulation alone is not enough.
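To make the idea concrete, here is a minimal sketch of what such a machine-readable tag could look like. This is purely illustrative: the field names (`content-origin`, `generator`, `model`) are my own invention, not part of any real standard or regulation.

```python
import json

def make_provenance_tag(generator: str, model: str) -> str:
    """Return a hypothetical JSON tag declaring that content is AI-generated."""
    return json.dumps({
        "content-origin": "ai-generated",  # invented field name, illustrative only
        "generator": generator,
        "model": model,
    })

def is_ai_generated(tag: str) -> bool:
    """Check whether a provenance tag marks content as AI-generated."""
    try:
        return json.loads(tag).get("content-origin") == "ai-generated"
    except ValueError:
        # Malformed or missing tag: we cannot claim the content is AI-generated
        return False

tag = make_provenance_tag("ExampleVendor", "example-model-1")
print(is_ai_generated(tag))  # → True
```

The point is not the format itself but that a simple, standardized, machine-checkable signal would let platforms, schools, and regulators reliably distinguish AI-generated content without guessing.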
If this does not happen, AI will not live up to the high expectation that it can be a positive force in the future. In fact, according to our Future Business Resilience and Spending Survey (Wave 11, December 2022), only 25% of government executives worldwide believe that the promise of AI has fully lived up to their organization’s expectations.
The future of generative AI (and of the AI market in general) will depend on whether users and providers embrace the narrative of human growth, in both the B2B and B2C worlds. We need to ask ourselves what kind of AI solutions we want – solutions that replace humans or solutions that augment them – and then design and engineer them in a way that reflects that purpose.
I look forward to discussing more about the power of innovation, and how we can use it at scale to have a positive and ethical impact on society, at our Government Xchange.