The implications of AI on a Center of Excellence

Elements.cloud
26 Oct 2023 · 24:21

Summary

TL;DR: The video script delves into the rapid rise of AI, especially ChatGPT, and its potential to revolutionize various industries. It emphasizes the importance of prompt engineering, data quality, and architectural considerations for organizations to harness AI's full potential. The speaker uses analogies to illustrate the early stages of AI development and warns against underestimating its future capabilities. The script also highlights the need for enhanced skills, such as prompt engineering, data governance, and business analysis, to leverage AI effectively. It encourages establishing centers of excellence to foster innovation, collaboration, and best practices within organizations, enabling them to identify AI's sweet spots and maximize its benefits.

Takeaways

  • πŸ˜€ GPT (Generative Pre-trained Transformer) models like ChatGPT have brought AI capabilities to the forefront, generating human-like text based on prompts.
  • πŸš— We are still in the early stages of AI: today's ChatGPT is a fast road car on a track, while AI's full potential is the Formula One car.
  • 🧾 Prompts need to be more explicit and detailed to get better results from GPT models. Prompt engineering is a new skill to develop.
  • 🧩 GPT models excel at solving puzzles and reasoning based on provided information, rather than using general knowledge.
  • πŸ‘¨β€πŸ’» AI highlights the importance of skills we should already have, like good business analysis, data quality, and documentation standards.
  • πŸ•΅οΈβ€β™€οΈ AI forces us to be more disciplined in areas like governance, change control, and metadata management due to added risks.
  • πŸ‘©β€πŸ« AI creates a need for new skills like prompt engineering, but also emphasizes improving existing skills like data governance and architecture.
  • πŸš€ AI can be a catalyst for getting buy-in from executives to fund projects and improve areas we've neglected in the past.
  • πŸ† Centers of Excellence (CoEs) should act as innovation hubs, bringing together people passionate about AI and exploring its use cases.
  • 🎯 Organizations should focus on finding the sweet spots where AI can provide significant ROI, easy adoption, and consistent results.

Q & A

  • What is the main topic discussed in the transcript?

    - The main topic discussed is the impact of AI and specifically GPT (Generative Pre-trained Transformer) on businesses, and the skills and processes organizations need to adopt to leverage AI effectively.

  • Why did ChatGPT gain such rapid adoption with 100 million users in 2 months?

    - ChatGPT gained rapid adoption because it was easy to use (just type in a prompt) and the results were staggeringly good, allowing users to generate text like song lyrics and content that seemed beyond current capabilities.

  • What analogy is used to describe the current stage of GPT development?

    - The analogy of a car is used. ChatGPT is like a high-performance Porsche GT3 on a track, which gives a glimpse of the capabilities of a Formula 1 car, but doesn't represent the full potential of AI, which is still in its very early stages, like a flip phone compared to an iPhone.

  • What are some key skills organizations need to develop to leverage GPT effectively?

    - Organizations need to develop skills in prompt engineering (crafting effective prompts), data quality, business analysis, following industry standards for documentation, understanding system architecture, data governance, and change management.

  • Why is prompt engineering important for GPT?

    - Prompt engineering is important because longer, more explicit prompts are needed to get better results from GPT. Prompts can be thought of as code that needs to be monitored, refined, and managed for dependencies, just like traditional code.

  • What are some use cases where GPT excels?

    - GPT is good at solving puzzles and generating text when given all the puzzle pieces (context and data). It performs well in areas like generating code, interpreting legal documents, and writing lyrics because it has ingested large amounts of relevant data.

  • What is the role of a Center of Excellence (CoE) in the context of AI adoption?

    - A CoE acts as an innovation hub, bringing together passionate individuals and teams to collaborate on AI initiatives, find use cases with strong ROI and ease of adoption, and ensure consistent and reliable results.

  • How does AI highlight the importance of existing best practices?

    - AI punishes mediocrity and exposes weaknesses in areas like data quality, business analysis, documentation, and architecture. It forces organizations to adopt disciplines they may have neglected in the past, due to the increased risks and potential impact of AI on business operations.

  • What are some key pillars or skills needed for a strong AI Center of Excellence?

    - Key pillars or skills include vision, leadership, governance, change control, methodology, standards, metadata management, architecture, security, change management, project management, tooling, and innovation.

  • How can AI be used as a catalyst for organizational change?

    - AI can be used as a catalyst for organizational change by highlighting the need for improvements in areas like documentation, architecture, and data governance. It provides an opportunity to get buy-in and funding from executives to address these issues, which were previously overlooked or underfunded.
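
The "puzzle pieces" pattern from the Q&A — hand the model the full contract text and the termination grounds instead of relying on its general knowledge — can be sketched as a small prompt builder. The function name and wording below are illustrative assumptions, not part of any real product:

```python
# Sketch of the "give it all the puzzle pieces" pattern: embed the
# full contract and the termination grounds in the prompt, so the
# model reasons over supplied context rather than world knowledge.
# `build_termination_prompt` is an invented name for illustration.

def build_termination_prompt(contract_text: str, grounds: list[str]) -> str:
    """Assemble a self-contained prompt: context first, then the task."""
    bullets = "\n".join(f"- {g}" for g in grounds)
    return (
        "You are drafting a formal contract termination letter.\n"
        "Use ONLY the contract below; do not rely on outside knowledge.\n\n"
        f"CONTRACT:\n{contract_text}\n\n"
        f"Terminate this contract on the following grounds:\n{bullets}\n\n"
        "Write the termination letter."
    )

prompt = build_termination_prompt(
    "Clause 4.2: Either party may terminate with 30 days notice...",
    ["Repeated late delivery", "Breach of clause 4.2",
     "Unapproved subcontracting", "Non-payment of invoices"],
)
print(prompt)
```

The design point is ordering: context before task, so the instruction "use only the contract below" has something concrete to bind to.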

Outlines

00:00

πŸš€ Explosive Growth and Potential of AI

The speaker talks about the rapid growth of AI, particularly GPT, which gained 100 million users in just two months. GPT's ability to generate realistic text outputs, such as lyrics for a country song, has brought AI to the forefront. The speaker uses the analogy of driving a supercar (GPT) versus a Formula 1 car (the full potential of AI) to illustrate that we've only scratched the surface of AI's capabilities. The current perception of AI is based on experiences with ChatGPT, which is like a flip phone compared to the iPhone of the future.

05:00

πŸ› οΈ Prompt Engineering and AI as a Tool

The speaker highlights the importance of prompt engineering, which involves crafting effective prompts to get the most out of AI models like GPT. Prompts need to be longer and more explicit, with detailed context to get better results. AI excels at solving puzzles when provided with the necessary pieces of information. The speaker emphasizes that AI should be used as a tool, not a replacement for human expertise. AI can generate convincing but potentially incorrect answers, known as "hallucinations." Proper delegation and validation of AI outputs are crucial. The speaker predicts the need for new skills like prompt engineering, rather than entirely new job roles.

10:01

🎯 Importance of AI Highlighting Existing Best Practices

The speaker discusses how AI is highlighting the importance of existing best practices that organizations may have overlooked or undervalued. AI's performance is heavily dependent on the quality of data, business analysis, documentation, and architecture. AI's ability to read and process this information forces organizations to improve their standards and follow industry best practices, such as using the Universal Process Notation for process mapping. The speaker suggests using AI as a catalyst to get executive buy-in and funding for improvements in these areas.

Keywords

πŸ’‘GPT (Generative Pre-trained Transformer)

GPT refers to a series of AI language models developed by OpenAI, with capabilities ranging from text generation to language understanding. In the video's context, GPT's significance lies in its ability to quickly generate coherent, contextually relevant text based on prompts, illustrating a leap in AI's accessibility and potential applications. The speaker highlights GPT's rapid user adoption and its intuitive interface, allowing users to generate creative content, such as song lyrics, with ease. This underscores GPT's role in democratizing AI use and sparking widespread interest in its capabilities.

πŸ’‘ChatGPT

ChatGPT is a specific application of the GPT model focused on generating human-like text in a conversational format. The script emphasizes ChatGPT's role in bringing GPT's potential to a broad audience, demonstrating how natural language processing can facilitate interactions that feel conversational and intuitive. Through examples like prompt engineering and solving puzzles, ChatGPT is portrayed as a tool that can perform complex tasks, from generating text based on detailed instructions to reasoning through problems given a set of conditions.

πŸ’‘Prompt Engineering

Prompt engineering is the practice of crafting inputs (prompts) designed to elicit specific outputs from AI models like GPT. The video script explores this concept extensively, suggesting that effective prompts can significantly enhance the utility and accuracy of GPT's outputs. It involves not only the formulation of questions or commands but also the strategic structuring of these inputs to achieve desired results. Examples include generating business documentation or performing specific tasks like writing termination letters, highlighting the importance of detailed, clear prompts in leveraging AI capabilities.
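
The templated-prompt idea the script describes — fixed wording with three or four bullet points inserted in set slots — might look like this minimal sketch using Python's standard library; the template text and slot names are invented for illustration:

```python
# Reusable prompt template, per the talk's internal-website example:
# the prompt is fixed wording with slots for bullet points, refined
# over time rather than retyped. Slot names are illustrative.
from string import Template

FEATURE_PAGE_PROMPT = Template(
    "Write an internal announcement page for the feature '$feature'.\n"
    "Audience: customer success, marketing and sales.\n"
    "Key points to cover:\n$points\n"
    "Tone: concise, benefit-led, no jargon."
)

def render_prompt(feature: str, points: list[str]) -> str:
    """Fill the template slots with the feature name and bullet points."""
    return FEATURE_PAGE_PROMPT.substitute(
        feature=feature,
        points="\n".join(f"- {p}" for p in points),
    )

print(render_prompt("Bulk export",
                    ["CSV and JSON output", "Runs async", "Audit logged"]))
```

Treating the template as a named, versioned artifact is what lets a product team "optimize and revise it over time", as the transcript puts it.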

πŸ’‘Car Analogy

The car analogy is used to contextualize the current state of AI and GPT technology relative to its potential. Comparing non-users to driving a Honda Accord, GPT users to driving a Porsche GT3, and the yet-unrealized potential of AI to the experience of a Formula One driver illustrates the vast spectrum of AI's capabilities and the stages of its accessibility and performance. The analogy emphasizes that while current applications like ChatGPT offer a glimpse of AI's potential, the future possibilities are as far beyond today's tools as a Formula One car is beyond a standard road car.

πŸ’‘Use Cases

Use cases refer to the specific scenarios or problems to which AI and GPT can be applied effectively. The video script discusses exploring and identifying successful applications of GPT, emphasizing the importance of understanding where AI can provide the most value. This involves recognizing tasks that benefit from AI's reasoning and generative capabilities, such as document generation or puzzle solving, and integrating AI into workflows in a way that enhances productivity and problem-solving.

πŸ’‘AI Skills

AI skills encompass the knowledge and abilities required to effectively use AI technologies like GPT. The video underscores the necessity of developing these skills across organizations, not as specialized roles but as competencies integral to various job functions. This includes the ability to craft effective prompts, validate AI-generated outputs, and understand the applications and limitations of AI tools. The script advocates for AI literacy as essential for leveraging AI technologies to their fullest potential.

πŸ’‘Hallucination

In the context of AI, 'hallucination' refers to instances where a model generates incorrect or nonsensical information as if it were factual. The video script touches on the importance of being aware of this tendency in AI outputs, underscoring the need for human oversight in validating and contextualizing AI-generated content. This concept illustrates a critical challenge in using AI: ensuring reliability and accuracy in its outputs.
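
As a hedged illustration of "validate, don't abdicate": one cheap, partial check is to flag any number in the model's answer that never appears in the material it was given. This is a toy grounding check, not a real hallucination detector:

```python
# Toy validation pass: flag numbers in the answer that do not occur
# in the source context -- a crude guard against invented figures.
# Illustrative only; real validation needs a human in the loop.
import re

def _numbers(text: str) -> set[str]:
    """Extract all integer/decimal literals from the text."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def unsupported_numbers(answer: str, source: str) -> set[str]:
    """Numbers in `answer` that never occur in `source`."""
    return _numbers(answer) - _numbers(source)

source = "The contract value is 50000 USD over 12 months."
answer = "The contract is worth 50000 USD and runs for 24 months."
print(unsupported_numbers(answer, source))  # flags the invented '24'
```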

πŸ’‘Metadata

Metadata refers to data that provides information about other data, which is critical for organizing, managing, and understanding data within systems. The script highlights the role of metadata in enhancing AI applications, such as by informing AI-generated user stories or test scripts. Effective management and documentation of metadata are portrayed as essential for leveraging AI in tasks like process automation, illustrating the intersection between AI capabilities and data management practices.
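
The claim that good metadata descriptions drive good AI recommendations suggests linting descriptions before an AI ever reads them. The rules below are illustrative assumptions sketched in Python (the MDD standard mentioned in the talk is not detailed in this transcript):

```python
# Illustrative metadata-description lint: flag fields whose
# descriptions an AI could not read "literally" and still use.
# Field keys and thresholds are invented for this sketch.

def description_issues(field: dict) -> list[str]:
    """Return human-readable issues for one metadata field record."""
    issues = []
    desc = (field.get("description") or "").strip()
    if not desc:
        issues.append("missing description")
    elif len(desc) < 20:
        issues.append("description too short to be meaningful")
    if field.get("label", "").isupper():
        # All-caps labels are often unexplained company acronyms.
        issues.append("label looks like an acronym; spell it out")
    return issues

print(description_issues({"label": "ARR", "description": ""}))
# -> ['missing description', 'label looks like an acronym; spell it out']
```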

πŸ’‘Center of Excellence (CoE)

A Center of Excellence refers to a team or entity within an organization dedicated to promoting excellence and innovation in a specific area, such as AI or Salesforce usage. The script discusses the CoE in the context of guiding, standardizing, and innovating AI practices within organizations. It is portrayed as a hub for collaboration, knowledge sharing, and governance, ensuring that AI technologies are used effectively and ethically across different departments and projects.

πŸ’‘Innovation

Innovation, within the script, is tied closely to the exploration and implementation of AI in solving new or existing problems in ways that significantly improve efficiency, effectiveness, or capabilities. The video emphasizes the role of innovation in identifying and capitalizing on AI's potential use cases, advocating for a proactive and open approach to experimenting with AI technologies. This includes fostering an environment where employees are encouraged to share ideas and experiences with AI, promoting a culture of continuous improvement and adaptation.

Highlights

GPT reached 100 million users in two months due to its ease of use and impressive results in generating content like song lyrics.

ChatGPT is compared to driving a Porsche GT3 to illustrate capabilities beyond traditional tools, while the complexity of a Formula One car's controls highlights how early a stage we are at with GPT.

The potential of GPT is often underestimated when only seen through the limited experience with chat interfaces.

Prompt engineering emerges as a crucial skill for effectively using GPT, requiring detailed and explicit instructions to generate valuable outputs.

GPT excels at solving puzzles given all the necessary pieces, showing its strength in reasoning and generating text based on specific inputs.

Applications embedding AI with well-defined prompts can significantly speed up tasks like creating user stories or test scripts, turning hours of manual work into minutes.

The importance of not abdicating responsibility to AI is emphasized: users need to be able to validate GPT's outputs in order to catch hallucinations and inaccuracies.

AI's current stage is akin to the flip phone era, suggesting that we are only at the beginning of its capabilities and impacts.

Effective use of AI requires better prompts, which is a new form of interaction that needs to be learned and optimized over time.

AI punishes mediocrity by quickly exposing poor data or analysis, emphasizing the need for high-quality inputs.

Documentation quality and the architecture of systems become increasingly important as AI relies on these elements to generate accurate outputs.

AI-driven changes in roles are about acquiring new skills rather than creating entirely new job titles, such as prompt engineers.

The analogy of AI as Google on steroids highlights its potential to vastly outperform existing tools when provided with well-crafted queries.

The concept of 'prompt drift' suggests that the same prompt can yield different results over time, necessitating ongoing monitoring and adjustment.
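
The "prompt drift" idea — the same prompt yielding different output over time as the underlying model is retrained — can be monitored by comparing new outputs against a stored baseline. A minimal sketch with `difflib`; the 0.2 threshold is an arbitrary illustrative choice:

```python
# Minimal drift monitor: compare today's output for a prompt against
# a stored baseline and flag large divergence. The threshold is an
# illustrative assumption, not a recommended value.
from difflib import SequenceMatcher

def drift_score(baseline: str, current: str) -> float:
    """0.0 = identical, approaching 1.0 = completely different."""
    return 1.0 - SequenceMatcher(None, baseline, current).ratio()

def has_drifted(baseline: str, current: str, threshold: float = 0.2) -> bool:
    return drift_score(baseline, current) > threshold

baseline = "Dear customer, your refund has been approved."
print(has_drifted(baseline, baseline))  # False
print(has_drifted(baseline, "Totally different reply about shipping delays and fees."))  # True
```

In practice this would run on a schedule against logged outputs, exactly the kind of monitoring the script notes we would never bother with for deterministic code or a flow.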

The Center of Excellence (COE) plays a crucial role in guiding AI adoption, emphasizing innovation, collaboration, and the strategic implementation of AI across teams.

Transcripts

play00:12

ai's been around what 15 16 years I

play00:15

think what's happened in the last year

play00:18

now uh GPT has suddenly and open Ai and

play00:22

chat GPT has suddenly brought everything

play00:24

to the Forefront GPT it got to 100

play00:26

million users in two months and why was

play00:29

that well first of all I think a it was

play00:31

easy you went and you type something but

play00:33

B the results were staggering you could

play00:36

type in and go give me the lyrics for a

play00:38

country song about this and it would

play00:39

actually go and generate that and GPT is

play00:42

just very good at working out what the

play00:43

next best word is um so I spend quite a

play00:46

lot of time on this on the confidence

play00:48

circuit at the moment and the first

play00:49

thing is that people think of GPT the

play00:52

potential of GPT Through The Eyes of

play00:54

what they can do with chat GPT over the

play00:56

last like six nine 12 months um

play01:00

and the analogy I like to use is the car

play01:02

analogy there so everyone out here can

play01:04

probably Drive there's a there's a

play01:06

hondre cord fantastic car um and that's

play01:09

what it's like not using GPT at all um

play01:12

some of us have been lucky enough to

play01:13

drive like that Porsche GT3 on a on a

play01:16

track and you're staggered by the levels

play01:19

of grip the acceleration you just can't

play01:21

believe actually how fast a car will go

play01:23

around a corner and you think okay I've

play01:25

now I now understand what the world of a

play01:27

Formula One driver is like and that's

play01:29

sort of where we all are we think we

play01:31

chat GPT we've seen what the future

play01:33

looks like and then you you walk off the

play01:35

pits and you bump into Lewis Hamilton

play01:37

the Formula One drive and he goes no no

play01:39

no you've you just don't understand what

play01:40

the Futures look like on the right hand

play01:42

side there's a picture there of the

play01:44

steering wheel of a Formula One car

play01:46

somewhere on there is a button called

play01:48

the launch control how you actually get

play01:50

it off the uh from from the the pit from

play01:52

the pits or from the uh from The

play01:54

Starting Line none of us could even get

play01:56

the car off the track without stalling

play01:58

and I think that's really where we are

play02:00

with GPT we're at the very very early

play02:02

stages and I think the first danger is

play02:05

that if people think about the potential

play02:07

of GPT Through The Eyes or or through

play02:10

their experience of what chat GPT can

play02:12

can do the other analogy is we're still

play02:14

at the flip FL flip phone stage we're

play02:17

nowhere near where an iPhone now is so

play02:20

we are really at the very very early

play02:22

stages it's not quite the wild west but

play02:25

actually it's very unstructured I think

play02:27

people are still finding out what use

play02:29

cases work what don't work and we need

play02:31

to try and cut through some of that and

play02:32

work work through to a go what what as

play02:34

leaders should we be worried about and

play02:36

how can we support our teams in terms of

play02:40

what they're currently doing with

play02:43

GPT so I think when when we think about

play02:47

chat GPT at the moment we tend to think

play02:48

it as Google on steroids we'll ask it a

play02:50

question it comes back with the answers

play02:53

and the better the question the better

play02:55

we ask the question the more complete

play02:57

the question we ask the better the

play02:59

answers if you just said to your intern

play03:02

who's bright and excited book me a

play03:03

restaurant they come back and they book

play03:05

your a restaurant the uh they booked you

play03:07

into the Italian restaurant for next

play03:09

Tuesday they go no no no no no not not I

play03:11

don't and and I really wanted a

play03:13

restaurant for next Thursday and we seem

play03:15

to have that conversation back and forth

play03:17

with GPT if you said to your intern I'm

play03:19

entertaining A clients they like Indian

play03:22

food and they like Chinese food uh we're

play03:24

driving and I'd like a table in a

play03:26

private room for four that's that's a

play03:29

good prompts suddenly the internet can

play03:31

come back and go right if I know that

play03:33

you're driving so I need we need parking

play03:35

space I know when I and I think we need

play03:37

to get to a point where we're asking

play03:38

better prompts so that's the first thing

play03:41

uh prompts need to be longer be need to

play03:43

be lot more explicit to get some of the

play03:44

benefits out of it and that is a new

play03:46

skill call it prompt engineering but

play03:48

there's actually more to prompt

play03:49

engineering than that and I'll explain

play03:51

that in a moment but the other thing

play03:53

that GPT is very good at it's very good

play03:55

at solving puzzles if you give it all

play03:57

the puzzle pieces now a slight challenge

play04:00

is that the limitation on the amount of

play04:02

information you can give it uh but you

play04:04

could give it I know a contract and then

play04:08

say Okay I want to terminate this

play04:10

contract against these four bullet

play04:12

points and it will write a reasonably

play04:14

good termination letter if you give it

play04:16

all so rather than trying to get it to

play04:19

use its knowledge of the world and

play04:21

knowledge of History instead think about

play04:23

it actually is using it its ability to

play04:26

reason and then come back with text and

play04:28

it's really really good at doing that so

play04:31

think more about how you give it the

play04:32

puzzle pieces than ask it to solve the

play04:34

puzzle and that's actually where I think

play04:36

we're getting into bit more of prompt

play04:38

engineering or certainly where you're

play04:40

seeing applications embedding AI into

play04:43

them where they've already thought about

play04:46

I'm asking GPT to do something in the

play04:48

context of my application it knows where

play04:50

to get the puzzle pieces it's now simply

play04:52

using GPT as the back end to solve those

play04:54

puzzle pieces and I mean an easy example

play04:57

is is our world elements. cloud if you

play05:00

map out a business process using the uh

play05:02

Salesforce UPN standard Universal

play05:05

process mapping

play05:07

notation AI rgpt application will write

play05:10

really really good user stories user

play05:13

stor with acceptance criteria and also

play05:16

test scripts also because we know what's

play05:19

in your org because we pulled all the

play05:21

metadata AI can then look at the

play05:23

acceptance criteria and then work out

play05:25

which metadata you could reuse or which

play05:28

metadata needs to be you so again

play05:31

solving what would take eight hours

play05:33

manually could be done in five minutes

play05:36

um but before we start getting worried

play05:38

about losing jobs or actually the Coe

play05:40

lead is going great well we don't need

play05:41

anyone in the team we need we need to

play05:43

think about what's actually happening

play05:45

here first of all we need to delegate we

play05:48

can't

play05:49

abdicate we can't ask GPT to do anything

play05:52

we can't do ourselves because we then

play05:54

can't validate the answers I think

play05:56

everyone's heard of the term

play05:57

hallucination AI is very good it

play06:00

actually it's rather like the male 22y

play06:03

old intern uh and I I deliberately use

play06:05

the word male because they are never in

play06:07

doubt but not necessarily right it gives

play06:09

very convincing answers which may not

play06:11

necessarily be correct so you need to

play06:13

make sure you a delegate the work scope

play06:17

it correctly but secondly you need to

play06:19

validate whether the answers has come

play06:21

back with are correct and I think we're

play06:24

still in the early stages of

play06:26

understanding the use cases where it

play06:28

works really really well and there's

play06:30

less hallucination we're going to see

play06:32

new skills not new roles so we won't see

play06:35

the role of a prompt engineer the same

play06:36

ways we don't have in our organizations

play06:39

the job of Google search engineer but I

play06:42

do think everyone needs Google search

play06:43

skills the same way as we all need will

play06:45

need prompt engineering skills um but I

play06:48

think what AI is doing is it's

play06:51

highlighting that some of the things we

play06:53

know we ought to be doing and don't do

play06:55

very well it's highlighting the

play06:56

importance of those so message here is

play07:00

first of all I mean phobo at the top

play07:03

there fear of being obsolete I think

play07:04

that's a very valid fear for all of us

play07:06

we need to stay current we need to

play07:08

understand the implications of new

play07:10

technologies coming in and what it means

play07:12

for ourselves but also for our teams but

play07:15

don't think about hiring new ski uh new

play07:17

roles think about actually how the

play07:19

skills we need to generate inside our

play07:21

organization I think this is about

play07:22

Career Development which is where we

play07:24

started this and then think about how

play07:26

you delegate you can't just go gbt told

play07:30

gave me the answer we need to think

play07:31

about how we delegate the the question

play07:35

to make sure we get the right in a way

play07:37

that we get decent answers

play07:39

back so first of all let me just think

play07:41

very briefly about the sorts of skills

play07:44

we need to to to create the first is

play07:47

prompt engineering which is something we

play07:48

haven't had before so the idea of

play07:50

writing a prompt and again back to my

play07:53

analogy of the Porsche versus a Formula

play07:55

1 car a prompt isn't just like a Google

play07:58

search with a little bit more there's

play08:00

actually quite a lot more involved in

play08:02

this first of all we need to a prompt

play08:04

can actually be quite detailed it could

play08:06

be almost like an email

play08:09

template inside our own organization

play08:11

we're using templates for prompts where

play08:14

we're inserting say three or four bullet

play08:15

points in different places and that

play08:17

prompt is then writing a website for us

play08:20

our internal website for all our new

play08:23

features and fun functionality so

play08:25

there's a website aimed at our customer

play08:27

success marketing and sales team so they

play08:29

get early sight of functionality that

play08:31

website is built by GPT using some

play08:34

templated prompts which are product

play08:37

management team of have optimized and

play08:39

revised over time so that's the first

play08:41

thing a prompt isn't just what you type

play08:43

in you could actually have a templates

play08:45

which you reuse so that's the first

play08:47

thing I think the second thing which is

play08:49

interesting is that when you test that

play08:51

prompt the result you're going to get

play08:52

could be different when it's done today

play08:55

versus a week's time versus A month's

play08:57

time because that prompt is hitting a

play08:59

large language model which could be

play09:01

optimized being trained could be being

play09:03

refined so unlike where I know we write

play09:05

a flow and it executes the same every

play09:08

time with a prompt it could change over

play09:11

time so we need to start monitoring it

play09:13

for

play09:14

drift we also need to make sure that

play09:16

when our teams are using those the

play09:18

result of the prompt that we understand

play09:21

how much they have to modify it so we

play09:23

understand how good that prompt was if

play09:25

it generates say an email that we send

play09:27

out to a customer and then the service

play09:30

agent or um the um support agent has to

play09:34

make loads of changes to it maybe the

play09:36

prompt needs to be

play09:38

refined is that prompt even being used

play09:41

uh or is it being used with no changes

play09:43

whatsoever which would be quite

play09:45

concerning so again there was some

play09:46

there's some monitoring of prompts that

play09:48

we probably would never do in terms of

play09:50

code or um say a flow so declarative

play09:54

code and then the other thing we need to

play09:56

worry about dependencies so if that

play09:58

prompt is is using say metadata from our

play10:00

Salesforce system or a third party

play10:02

system like data Cloud if that metadata

play10:06

gets changed the prompt will still work

play10:09

so we need to understand when we're

play10:11

actually making changes to metadata does

play10:13

a prompt use it almost the same way as

play10:15

does an email template use it except

play10:17

there's a bit more at stake now if that

play10:19

prompt is using a large language model

play10:21

to make some changes or decisions based

play10:23

on the data coming out of that metadata

play10:25

so think of prompts as certainly code

play10:29

but actually um almost a riskier set of

play10:32

code than maybe um a flow or Apex so

play10:36

that's a new skill the other the other

play10:38

four items on there are things that we

play10:40

should or ordinarily be doing but maybe

play10:43

is it probably doesn't have the level of

play10:45

importance as it as we probably should

play10:48

allocate to it uh but AI punishes

play10:51

mediocrity if you've got poor data it

play10:54

will give you poor results really

play10:55

quickly if you haven't done very good

play10:57

business analysis and you so the way AI

play11:00

is reading those business process Maps

play11:03

it will it won't give you very good user

play11:05

stories and what we've discovered over

play11:07

the last couple of months of using El

play11:08

elements GPT is we looked at the user

play11:11

story go oh that's not very good that

play11:12

was GPT and then we went back and looked

play11:14

at the process map that wasn't a very

play11:16

good we actually didn't set it up very

play11:18

well how often are we actually giving

play11:21

poor user stories to our development

play11:22

teams because we haven't thought about

play11:24

the business analysis so it's suddenly

play11:27

becoming making business analysis way

play11:29

more important because now ai is reading

play11:31

that documentation we're creating it's

play11:33

not just a user and quite often when I'm

play11:36

presenting when I I talk about um AI

play11:39

reading either process documentation or

play11:42

even metadata descriptions is it going

play11:44

to be

play11:45

confused or disappointed no longer have

play11:49

we got a a a an individual or person

play11:52

there think understanding the Nuance

play11:54

understanding the assumptions

play11:55

understanding those weird acronyms that

play11:57

are specific to your company when AI is

play11:59

now reading your business analysis

play12:01

documentation it's reading it literally

play12:04

And the results are based on us following some well-known standards: UPN for process mapping; a set format for user stories; ERDs (entity relationship diagrams) and data flow diagrams, which all have standard industry formats we need to be following. And as an organization we created something called MDD, metadata description definitions: if you're going to document your metadata, let's work out the best way of documenting it so AI can read it. Because what we've discovered is that with good descriptions it can come up with some really good recommendations; with no descriptions it's okay, but not nearly as good as when it's got good descriptions. So the last bullet point on there, documentation, has suddenly become more important: the quality of the documentation.
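The point about descriptions can be sketched in code. This is a hypothetical illustration, not the MDD template itself: the field names, types, and prompt wording below are invented for the example.

```python
# Hypothetical sketch: render field metadata as text an LLM reads literally.
# Field names, types, and prompt wording are illustrative, not a real MDD
# template or Salesforce API output.

def field_context(fields):
    """Render field metadata as lines an LLM can read literally."""
    lines = []
    for f in fields:
        desc = f.get("description")
        # A missing description leaves the model guessing from the API name alone.
        lines.append(f"- {f['name']} ({f['type']}): {desc or 'NO DESCRIPTION'}")
    return "\n".join(lines)

fields = [
    {"name": "ARR__c", "type": "Currency",
     "description": "Annual recurring revenue in USD, updated nightly from billing."},
    {"name": "Tier__c", "type": "Picklist", "description": None},
]

prompt = (
    "You are reviewing a Salesforce org. Using only the field metadata below, "
    "recommend which fields need data governance attention.\n\n"
    + field_context(fields)
)
print(prompt)
```

With a description, the model can reason about what `ARR__c` actually holds; without one, `Tier__c` is just an API name, which is why recommendations degrade.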

Architecture: again, always important, but suddenly the architecture of our systems is not just our internal Salesforce system and maybe the other internal systems it connects to. If we're now touching a foundational model, a large language model, that architecture has got way more complicated. Salesforce is putting in place the Einstein trust layer so that we've got some confidence our data isn't being sent to a large language model, but we're now relying on yet another moving part inside our ecosystem, inside our IT stack. Plus, of course, we're using a large language model which is potentially outside, or we have our own large language model which we bring inside. So again there are a lot more moving parts, and understanding the architecture of what we're building, and the interdependencies if we make changes to any of those parts, becomes really important.
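The idea behind a trust layer can be illustrated with a minimal sketch: mask sensitive values before a prompt ever leaves your stack. This is not how Salesforce's Einstein trust layer is actually implemented; the patterns and function names here are assumptions for illustration only.

```python
import re

# Hypothetical sketch of the idea behind a "trust layer": mask sensitive
# values before a prompt leaves your stack. The patterns are illustrative,
# not Salesforce's actual Einstein trust layer.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def mask(prompt: str) -> str:
    """Replace sensitive values with placeholder tokens before sending."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

safe = mask("Draft a follow-up email to jane.doe@example.com, phone +1 555 010 1234.")
print(safe)  # sensitive values replaced by [EMAIL] and [PHONE] tokens
```

The design point is that the masking step is one more moving part: change the patterns, the prompts, or the model, and you need to understand the interdependencies.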

These things we're talking about, we should be doing anyway; it's just that AI is forcing us to do them, and we're realizing we haven't done them to a particularly good standard. Yes, we've got all these fields, but have we actually got decent data governance on the critical fields, the ones that are maybe in the dashboards our executives are looking at? Have we even done that? Probably not. And AI is going to be looking at those same fields.

Because I've been around change projects for 20 years, I'm always looking for that catalyst, that reason for executives to go "yes, we should fund this; yes, we will support you; we will give you the sponsorship." And I feel AI is that point where we can go: if we're going to benefit from AI, we've got to get our act together in certain areas. This is the opportunity to get the buy-in from senior executives to do the things we knew we wanted to do but never got the time to do. "Oh, documentation? We haven't got time for that." Now we have. "Don't worry about architecture, we'll fix that later." You can't. There's that classic cartoon: "you start coding, and I'll go and ask the users what they need." The big issue is that people don't spend enough time understanding what they really should be building before they build it. Hopefully AI will shorten the time it takes to understand what should be built, and then we can use this as the excuse to get projects funded

correctly. So let's turn our thoughts to the center of excellence and what this means. You've probably heard enough from me by now, but let me just go through these 13 pillars of a center of excellence. Before everyone goes "hang on, we haven't got 13 people, we can't have 13 pillars": these are a set of skills that you need inside your organization around a CoE. Let me run through them very quickly.

- Vision: who's driving the strategic vision, the direction for Salesforce, from both the business and the IT perspective? That's different from leadership.
- Leadership: the steering committee, the key sponsors, who are validating and setting that direction.
- Governance: the business case, the investment, risk management; the overall control of the strategic direction.
- Change control: different from governance and a lot more tactical; the management of changes to all aspects of the program, whether that's code changes, changes to training material, or changes to org charts.
- Methodology: your implementation methodology, covering people, process, and technology; think of it as the business analysis piece, DevOps, adoption, monitoring. You don't implement once, so how do you drive changes around that cycle?
- Standards: we just talked about documentation standards and business analysis, but it's also metadata naming, coding standards, testing standards, standards for training, standards for change management. Setting standards makes things easy, so people aren't writing documents and recreating things from first principles.
- Metadata management: every Salesforce org now runs on metadata, so you need a good handle on it; metadata management is as important as managing the code.
- Architecture: we're now building complex, interdependent systems, and we need to think about the technical architecture but also how it relates to the integrated systems.
- Security: security needs to be architected in; it's way harder to try and do it after the fact, and we need to think about security, performance, and architecture hand in hand.
- Change management: this is the people change management, as opposed to the code change management, which comes under change control. How do you change hearts and minds? How do you think about organizational structure? How do you make sure the training is there to put the skills in place: not just Salesforce skills but domain skills, business skills, and obviously AI skills?
- PMO: your program (or project) management office, managing the overall program and making sure we're delivering against the targets, earning that earned value.
- Tooling: what are the platforms and tools we use? Not everything you need comes in the box when you buy Salesforce: you need business analysis tools, data quality, DevOps, backup and restore. There's a set of tooling you need to run Salesforce, and it either needs to be built or bought from third parties.
- Innovation: how do you foster innovation, spot it in certain areas of your organization, build it up, operationalize it, and reapply it across other orgs if you've got a multiple-org implementation; or just make sure you're getting the collaboration to produce that innovation and make it

operational. That's a huge list I've gone through, and many of you are going "we're a relatively small org, this feels like overkill." So what we've done is put together what we think are the most important items if you're small. And by small it doesn't necessarily mean not very many users: you could be a small wealth management operation, a VC, or a private equity firm, with not very many users, but the complexity means you essentially fall into the large bucket. So it's really small and low-complexity versus large and high-complexity. We've looked at this and tried to work out which things you have to have in place if you're at the lower level of complexity: some leadership, metadata management, architecture, security, and so on. Obviously when you get to the top end, you'd expect to have all of those aspects in place.

Which ones do I think are impacted by AI directly? The reason I want to do this is that some of those things, if you're a smaller org, are suddenly, because of AI, fitting into the mandatory box. If we're starting to use AI to maybe write emails inside a page layout for sales, suddenly that governance has become way more important. The whole idea of "we're not doing anything very complicated because we've only got five, ten, fifty, or a hundred users; fine, we've got some email templates" no longer holds. Suddenly, with AI, change control, governance, the methodology for how we're going to control those prompts, metadata management: all of those things have become way more important. Firstly, AI is forcing us to put those disciplines in place; but secondly, we've added a whole set of extra risks into our org, because we've got prompts hitting a large language model which may be outside our control. If you let it get out of control, it can kill a
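One way to picture "controlling those prompts" is to treat each prompt like any other versioned artifact under change control. A minimal sketch, with an invented schema (the registry shape and names are assumptions, not any particular product's API):

```python
import hashlib

# Hypothetical sketch: treating prompts as governed, versioned artifacts
# rather than free text typed into a page layout. The schema is illustrative.

def register_prompt(registry, name, template, owner):
    """Store a prompt template with a content hash so changes are auditable."""
    version = hashlib.sha256(template.encode()).hexdigest()[:8]
    registry.setdefault(name, []).append(
        {"version": version, "template": template, "owner": owner}
    )
    return version

registry = {}
v1 = register_prompt(registry, "sales_followup",
                     "Write a follow-up email about {product} for {account}.",
                     owner="RevOps")
v2 = register_prompt(registry, "sales_followup",
                     "Write a concise follow-up email about {product} for {account}.",
                     owner="RevOps")

# Any change produces a new version, so change control can review the diff
# before an altered prompt ever reaches the large language model.
print(v1 != v2, len(registry["sales_followup"]))
```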

reputation so quickly. I think one of the things at the bottom there, innovation, is typically treated as "only when we're a large organization can we think about innovation." I think that innovation piece is really important. We're not a huge organization at Elements Cloud, but we had an all-hands call yesterday, Thursday, and the topic was AI. What we wanted to do was not tell our employees how AI works, or educate them; instead it was an open forum: let's go around each of the different business units and talk about how you're using AI, what's working and what's not working. So we actually used it as a collaborative forum, and I think the role of a CoE in terms of AI, first of all, is acting as that innovation hub: how do you get different teams to collaborate?

Because even if you say "our organization is not using AI," you know that individuals are. You know that people are playing in the margins, and if they're not thinking about it, they're about to. There are people out there looking, using, testing, playing, and the center of excellence is the perfect way to get those people together and find out who is passionate and out there doing things, versus who is sitting on the sidelines going "maybe it won't work for me."

So I think, amongst all the hype, we need to try and find what the sweet spots are, because AI won't solve every problem for every person. The way I'm seeing it, we need to find the intersection. First: is there a really strong return on investment; can I get a 50x or a 20x improvement? With us, we're seeing a 100x improvement building those user stories. The second one is: can it actually be

implemented? If you've got to have gigabytes of data, all of it perfect, then it's probably not achievable. Again, back to our story: if you can write a process map, boxes and lines in the UPN standard, you can immediately get that benefit. So can we find a sweet spot where it can be easily adopted?

play23:14

easily adopted and then the I think the

play23:17

other piece is and we we tucked on it

play23:19

slightly earlier which is are the

play23:21

results consistent enough to be to be

play23:24

usable there's no point going oh 60% of

play23:27

the time it gets the right answer if you

play23:29

have to check every single line of every

play23:33

then probably not worth using and
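That consistency test can be framed as simple arithmetic: if every answer has to be reviewed anyway, the AI only pays off when review-plus-rework costs less than doing the work by hand. A rough sketch, with made-up costs:

```python
# Hypothetical sketch: deciding whether AI output is "consistent enough to
# be usable" by weighing accuracy against the cost of checking every answer.
# All numbers below are invented for illustration.

def worth_using(accuracy, review_cost_per_item, manual_cost_per_item):
    """AI helps only if reviewing its output beats doing the work by hand.

    If every item must be reviewed regardless of accuracy, the expected cost
    per item is: review + (1 - accuracy) * redo-by-hand.
    """
    expected_cost = review_cost_per_item + (1 - accuracy) * manual_cost_per_item
    return expected_cost < manual_cost_per_item

# At 60% accuracy with a costly review step, AI loses to doing it manually.
print(worth_using(accuracy=0.60, review_cost_per_item=7, manual_cost_per_item=10))  # False
# At 95% accuracy with a cheap review step, it's clearly worth it.
print(worth_using(accuracy=0.95, review_cost_per_item=1, manual_cost_per_item=10))  # True
```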

Another sweet spot is code: because it's hoovered up lots of code, it's quite good at writing code. It's actually quite good at looking at legal documents too, because it's read lots of legal documents and is therefore quite good at interpreting them. So there are certain places you can look where it's very good: writing lyrics for songs, for example, because there's no right answer for a song; again, not relevant to us. There are certain places where there is that sweet spot, and that's where I think OpenAI and ChatGPT have taken off. But as CoE leaders we need to start thinking about our innovation hub: how do we start to find those sweet spots?