AI transformation is everywhere. While digital transformation focuses on the nexus of people and technology, AI transformation emphasizes combining automation and augmentation to optimize and exponentially scale performance.

Member states of the EU have now agreed on an overarching framework for a new AI Act. The flagship legislation makes the EU a forerunner in setting clear rules for the use of AI. These guardrails originated in the first draft of the EU's rule book in 2021, when apprehensions about opaque algorithms first surfaced.

The new AI Act, the most ambitious set of standards anywhere, is designed to mitigate clearly identified dangers from specific AI functions, graded along a continuum of risk from low to unacceptable. The rules will also cover the foundation models that now constitute the undercarriage of general-purpose AI services.

However, consensus on foundation models will remain contentious across the bloc of EU states. Many are hoping to opt for self-regulation, which would allow indigenous European generative AI companies to compete with platforms that enjoy a first-mover advantage.

It is worrisome that foundation models could be used to supercharge disinformation. Additionally, the fuzziness surrounding the data lakes and libraries that were used to train present models is unsettling.

Under the new provisions, technology companies doing business inside the EU will have to disclose the data used to train their AI systems and must thoroughly test their models, especially those that can cause harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. There will be a ban on social scoring and on the use of AI to manipulate user vulnerabilities.

The new AI Act will bar the indiscriminate scraping of images from the internet or from security footage to create facial recognition data lakes. However, the use of real-time facial recognition and biometrics by law enforcement to investigate acts of terrorism and violent crime will be exempt from the ban.

Any breach of the rules by a technology company can result in a fine of up to seven per cent of global revenue, depending on the violation and the size of the firm.

Recognizing the threat to privacy and democracy posed by some models, co-legislators have agreed to prohibit: (1) biometric categorization systems that use sensitive characteristics such as race, sexual orientation, faith or the lack thereof, political persuasion, and philosophical beliefs, (2) emotion recognition in educational institutions and in the workplace, (3) social scoring based on personal characteristics and behaviour, (4) the circumvention of human free will, and (5) the use of AI to exploit vulnerable groups on account of their age, financial status, or frailties.

Remote Biometric Identification (RBI) systems will benefit from exemptions that allow law enforcement to use them in publicly accessible spaces, subject to judicial authorization and for a narrow list of crimes. "Post-remote" RBI will be restricted to the targeted search for a person convicted or suspected of having committed a serious crime such as terrorism, murder, kidnapping, rape, robbery, participation in a criminal organization, environmental crime, sexual exploitation, abduction, or human trafficking.

The moment has arrived to ask different and better questions about how the state and companies will work to solve bigger problems in new ways, and to re-think and re-make models for continuous transformation.

Not only must software solutions be open source, but organizations themselves must be re-wired to be open and flat. Only then will IT be instrumental in driving change to achieve growth and performance within each function and line of business, and across the enterprise.

Schools, universities, and businesses must re-think, re-wire, and re-make themselves by allowing people to nurture the capabilities and skills to create the results they truly desire. This is achieved by nurturing new and expansive patterns of thinking, feeding off the flux all around, and by continuously learning to see the whole together.

Within this frame, Generative AI is not just another technology to incorporate into the technology stack. It is the next iteration of business—an industrial revolution—that transcends and builds on the internet and mobile revolutions. On the one hand, it can automate work or make it better, faster, or more scalable, while on the other it invites creative leaders to re-think what a business and even a school needs to resemble in 2030, and to re-make, and re-wire the organization.

Generative AI, spatial computing, edge, and quantum technologies will become more visible, forcing schools and businesses to respond to these development trends. Rather than merely reacting, they must thoughtfully future-proof the function of the organization by being pre-emptive: becoming technology observatories, not storehouses of forlorn tools and ideas.

CEOs must now task managers with developing an AI strategy that defines not only digital and AI transformations but also business transformation. IT must ensconce itself inside business and operational transformation teams.

For countries and companies to succeed in the Age of AI, they will need to master problem-framing and problem-finding for continuous innovation. Problem-finding requires visioning exercises to unveil what is missing. Problem-framing aligns squads into sprints to probe unruly obstacles and issues across departments, and to articulate and prioritize flexible actions.