Operational case studies of Generative AI (GenAI) delivering cost efficiencies and performance gains abound, but there is little big-picture guidance on the technology's top-line implications for C-suite leaders.

In this blog, we summarize the definitive characteristics of GenAI, how they change existing strategic, operational, and ethical paradigms, and the opportunities and risks of the emerging technology for businesses.

The “generative” edge: what makes GenAI different?

Where conventional AI/ML programs employ computational power to aid decision-making, forecasting, and modeling, GenAI generates new, original content from large datasets. Four critical differences set it apart from older AI/ML technologies:

  • GenAI’s 3-stage content generation process (a toy sketch follows this list):
    • Inputs: Captured datasets are fed into the program for processing
    • Learning and processing: By identifying common patterns, the program learns from the data and generates content accordingly
    • Outputs: Generated content is evaluated against command parameters, capturing adjustments for future generation
  • GenAI learns from unstructured data. GenAI absorbs vast quantities of raw data and identifies patterns to guide content generation.
  • GenAI can learn unsupervised. Its lack of reliance on structured inputs enables it to process and learn with minimal intervention.
  • GenAI consumes more power. Its reliance on enormous data volumes demands intense computational power.
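To make the 3-stage loop concrete, the purely illustrative Python sketch below ingests raw text (inputs), learns simple word-to-word patterns (learning and processing), and generates new content that is checked against a target parameter (outputs). The corpus, parameters, and toy model are assumptions for illustration only, not a production GenAI architecture.

```python
import random
from collections import defaultdict

# Stage 1 - Inputs: raw, unstructured text is fed into the program
corpus = "generative ai creates new content by learning patterns from raw unstructured data"

# Stage 2 - Learning and processing: identify common patterns (here, word-to-word transitions)
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int) -> str:
    """Stage 3 - Outputs: produce new content from the learned patterns."""
    output = [seed]
    for _ in range(length - 1):
        candidates = transitions.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

# Evaluate the output against a simple "command parameter" (target length)
# and capture an adjustment for future generation runs.
target_length = 6
draft = generate("generative", target_length)
if len(draft.split()) < target_length:
    target_length += 2  # adjustment carried into the next generation cycle
print(draft)
```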

These characteristics present radically different security risks, value propositions, and ethical issues compared to older AI/ML applications.

Maximizing business value with GenAI

Early adopters are deploying GenAI to cut costs, accelerate processes, enhance productivity, and pinpoint consumer behavior. Examples include:

  • Content generation. GenAI significantly accelerates production of content assets like blogs, social posts, and emails (see the sketch after this list).
  • Personalization. From search engine optimization (SEO) to adaptive product descriptions and chatbots, GenAI can tailor responses to suit customer behavior.
  • Strategic planning. Techniques like “brand breaking” enable businesses to experiment with brand identities, supporting strategy formulation.
  • Code generation. GenAI can accelerate processes and improve cost-efficiency by generating and helping maintain code.
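As a minimal sketch of the content generation use case, the snippet below drafts marketing copy through a hosted GenAI model. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt, and product details are placeholders to be replaced with whatever provider, model, and guardrails your organization has approved.

```python
# Minimal sketch of the content generation use case, assuming the OpenAI
# Python SDK (pip install openai) with OPENAI_API_KEY set in the environment.
# The model name, prompt, and product details are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute your approved model
    messages=[
        {"role": "system", "content": "You are a concise marketing copywriter."},
        {"role": "user", "content": "Draft a three-sentence product description "
                                    "for a reusable stainless-steel water bottle."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # route the draft through human review before publication
```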

But GenAI capabilities come with unique risks, and businesses should prepare for:

  • Accuracy and reliability errors. GenAI programs can make factual mistakes due to erroneous input data.
  • Emergent technical competencies. Specialists with GenAI operating expertise will be needed to design, operate, and maintain the platform.
  • Staff training initiatives. GenAI cannot create content beyond its input datasets, and will still require staff to direct the program.

Above all, GenAI objectives and use cases must be aligned with general business strategy to achieve success. Select use cases that help define the role, scope, and type of GenAI output your organization needs to add value.

Securing the GenAI learning model

Both AI and GenAI software run on servers and rely on large databases, and thus retain traditional vulnerabilities like perimeter breaches and data leaks. However, GenAI’s 3-stage learning process creates unique potential attack vectors:

  • Attacks on inputs with “data poisoning”. GenAI programs are heavily reliant on their datasets. Contaminating them with biased or unbalanced inputs sabotages the program and degrades its results (see the toy demonstration after this list).
  • Reverse-engineering attacks. Attackers can triangulate a program’s algorithmic rules by questioning the program directly. Interactive programs like chatbots are especially vulnerable.
  • Inducing output errors with “evasion”. The program’s output can be manipulated by introducing errors in its process (e.g., pasting stickers on road signs to confuse an autonomous driving AI’s recognition system).
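The data poisoning vector is easy to demonstrate on a toy model. The sketch below, which assumes scikit-learn purely for illustration, trains the same simple classifier on clean and on deliberately label-flipped training data and compares accuracy on a held-out test set; the dataset and poisoning rate are synthetic.

```python
# Toy demonstration of data poisoning: flipping a fraction of training labels
# degrades a simple classifier. scikit-learn is used here as an assumed,
# illustrative toolkit; the data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" 30% of the training labels, as a contaminated input dataset might
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```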

Businesses can implement the following countermeasures to fortify AI processes and defenses:

  • Build AI security capabilities across the 3 learning stages, such as:
    • Filtering datasets for contamination
    • Adversarial learning models
    • Defensive distillation to evaluate processes
    • Active monitoring and moderation of outputs
  • Create a GenAI scoring system and adaptation-to-risk analysis based on trust criteria (a lightweight sketch follows this list):
    • The program’s intended scope of use
    • Selected AI learning and processing models
    • Clearly defined target datasets for inputs
    • Precise process tasks and desired outputs
  • Build security evaluation and audit capabilities. Internal “AI red teams” systematically audit and test the effectiveness and efficiency of implemented security measures.
  • Appoint dedicated monitoring teams focused on AI developments: emerging threats, legislation, and bans. Collaboration with data science teams can improve future calibration of security features.
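A scoring system of this kind can start lightweight. The hypothetical Python sketch below scores a GenAI use case against the four trust criteria listed above; the field names, scoring scale, and example values are assumptions for illustration, not an established methodology.

```python
from dataclasses import dataclass

@dataclass
class GenAIUseCase:
    """Hypothetical trust criteria; the fields mirror the four bullets above."""
    name: str
    scope_is_limited: bool       # the program's intended scope of use
    model_is_vetted: bool        # selected AI learning and processing models
    datasets_are_defined: bool   # clearly defined target datasets for inputs
    outputs_are_specified: bool  # precise process tasks and desired outputs

def trust_score(use_case: GenAIUseCase) -> int:
    """Score a use case from 0 to 4; low scores flag where risk mitigation is needed."""
    return sum([
        use_case.scope_is_limited,
        use_case.model_is_vetted,
        use_case.datasets_are_defined,
        use_case.outputs_are_specified,
    ])

chatbot = GenAIUseCase("customer chatbot", True, True, False, True)
print(chatbot.name, trust_score(chatbot))  # e.g. "customer chatbot 3"
```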

Formulating cohesive data and AI ethics

Modern data capabilities are among the fastest-growing technology developments, with generative AI attracting widespread attention from venture capitalists, entrepreneurs, executives, and the general public. At the same time, both technology insiders and lawmakers are raising concerns about its potential impacts. Having seen regulation and oversight lag behind previous waves of rapid technological change, such as social media and cryptocurrency, lawmakers may try to respond faster to AI.

There is great potential for exciting applications, but it is equally important to consider the ethical implications and ensure these technologies are used in responsible and beneficial ways.

Wavestone’s Data and Analytics Leadership Executive Survey 2024 revealed that organizations are falling short in their commitment to data and AI ethics.

Less than 50% report they have well-established data and AI ethics governance and practices. Only 23.8% of executives believe the industry has done enough to ensure responsible data and AI ethics standards.

We suggest focusing on three areas to start developing a unified ethical policy and associated governance program:

  • The ethics strategy. Developing an ethics strategy will help guide an organization’s approach, including whether to adopt a defensive (compliance-focused) or offensive (champion-focused) posture toward GenAI. Some key questions to consider:
    • How does Data & AI ethics align with your corporate values?
    • What are your competitors doing?
    • What do your customers think?
  • Ownership and roles. Determine who is driving the ethics strategy and include a broad set of stakeholders across the organization. Disseminate responsibilities and best practices transparently to foster trust with customers, employees, and stakeholders.
  • Governance practices. One of the first considerations for governing the ethical use of Data & AI is whether to create a new policy, framework, or governance model, or to integrate Data & AI ethics into existing policies, frameworks, and tools. The approach will vary depending on:
    • The role of Data & AI in your organization
    • The maturity of existing policies, frameworks and governance processes
    • The potential scale and impact of both benefits and risks to the organization

Building the cross-functional teams, tools, and skills to manage GenAI deployments is complex and demands cohesion from every business function. Consult expert advisors for detailed guidance on integrating your GenAI efforts and leveraging the technology's advantages to the fullest.

Have a question? Just Ask.


Have questions about making Generative AI work effectively for your business? Contact a Wavestone data and AI expert for bespoke guidance.
