Why do so many companies approach generative AI the wrong way?
Lately, we have seen quite a few reports about Swedish companies falling behind in adopting generative AI. Fear, misunderstanding, lack of knowledge, or a general allergy to buzzwords and trends among management and boards can certainly hold Swedish companies back, but I believe the biggest obstacle is something else. I often hear things like “we let some employees test it, but it didn’t work very well” or “we ran a pilot to get a proof of concept, but the outcome wasn’t good enough for management to want to scale it up.”
What is the problem?
As generative AI develops, most companies face a groundbreaking opportunity to transform how they work and do business, but the rapid pace also creates challenges, especially when it comes to evaluating and implementing generative AI. I don’t think the biggest obstacle is fear, or the unstructured and fragmented data that many (consultants) like to talk about.
The data challenge can feel overwhelming, an insurmountable mountain for companies carrying heavy technical debt, but I believe it is a temporary problem. Just six months ago, most people thought generative AI was quite bad at understanding context, yet we are already seeing developments that will soon change that. Combined with the shift toward autonomous AI agents, technical debt and unstructured, fragmented data may not remain big obstacles for much longer. The biggest obstacle, I believe, is culture.
What makes the development of generative AI unique right now is that it is taking enormous leaps forward at a pace we have never seen before. A solution or tool that falls short of the requirements today may exceed all expectations a month later. Keeping up with this, and not dismissing something that might revolutionize your business, requires a different tactic. Traditional technology assessments, which often rely on fixed tests and linear predictions, do not work. Companies need a more dynamic and forward-looking methodology for evaluating and implementing generative AI. Some of the key components, I believe, are:
Create a Learning Organization
It is more important than ever to build a culture where adaptability, curiosity, and continuous learning are at the center. An agile way of working fits this perfectly, but scaling up a new way of working is a large and arduous task. It may be easier to start by investing in a program for education and development that encourages employees to explore AI tools and stay updated on new advancements.
Tips on what to do:
Create the right conditions: Allocate a dedicated budget for a program of education and hands-on learning. The budget should also cover the cost of employees setting aside time to participate.
Encourage (but do not force) participation in the program across the entire company, for example by giving employees dedicated time to take part, similar to Genius Hour.
Appoint one or more knowledge and process leaders to drive the program. If you can’t find someone with the right qualifications internally, hire or bring in an external expert. For example, someone like me. ;)
Introduce a Continuous Evaluation Process
Instead of occasional, isolated pilot projects, run an ongoing process that regularly evaluates the progress and relevance of AI tools. Think of the program as a kind of never-ending cross-functional pilot.
Tips on what to do:
Allocate part of the program time for each participant to stay up to date with AI developments in their own area, which they then share with the others.
Conduct regular reviews (quarterly or semi-annual) of already tested AI tools to identify improvements and new use cases.
Set relevant goals and metrics tied to the business so that you can compare the performance of the tools over time.
Change Procurement and Integration Processes
With rapid technology shifts, flexible processes are required for the procurement and integration of new technology. Contracts should be adaptable, and IT systems should be designed for easy integration of new features.
Tips on what to do:
Include IT and procurement in the program.
Consider more flexible contract models (e.g., modular contracting) that allow faster changes of systems and suppliers.
Ensure that you have a policy that allows quick changes without ending up in ethical, legal, or security gray zones.
Encourage Experimentation and Accept Failures
A culture that continually dares to test and fail is crucial. Every test with generative AI, regardless of the outcome, should be a learning experience along the way.
Tips on what to do:
Create a safe environment where employees can test ideas and learn from mistakes. This is cultural work: it should start with management, run in parallel with the program, and then spread throughout the organization.
Start adopting agile ways of working outside of IT. If you haven’t scaled up agile across the organization, you can at least do it within the program.
I don’t know if it was Jardenberg who coined the phrase, but remember that whatever you test will probably never get worse, only better. Dismissing something because it didn’t work optimally the first time, without a culture and process for continuously reassessing that decision, can have catastrophic consequences in the long run. Especially for large, established companies. Do you agree?