
AI Project Management – Measure Phase

In the last section we talked about the Define phase in Lean Six Sigma. As a result, we will have a set of documents that answer the following questions:

  • Will the A.I. project financially benefit my company?
  • Will the A.I. project make a certain process faster, better or cheaper?
  • What will customers, employees and the business consider a successful A.I. project? What are the clear needs of each of the three groups?
  • Define process metrics / KPIs for customers, employees and the company.
  • Define success metrics for the A.I. project – which values exactly have to be reached to call it a success or a failure?

As Lean Six Sigma is data and math oriented, we try to put our project or problem in an equation:

X1 + X2 + X3 + … + Xn = Y

Each X represents a cause of the problem and Y is the effect.

Imagine your customer satisfaction is low, which is problematic for you, and you want to use machine learning to help fix it.

While the Define Phase sets the frame for solving a specific problem, the Measure and Analyse Phases try to answer the question: “What causes my problem?”

In your equation, “customer dissatisfaction” would be your Y. The reasons for low customer satisfaction, on the other hand, would be represented by the Xs. Reasons could be “the phone system always breaks down when customers call”, “headsets are of bad quality”, “customer agents are untrained”, “sold products are of low quality”… In the end, this would look something like this:

Bad service tech + Untrained customer agents + Low-quality products = Customer dissatisfaction

That already looks like something your A.I. outsourcing partner could start to understand, doesn't it? Of course, at a later stage you need to know exactly how to measure each X and define it in a data collection plan, but this equation already condenses the complete complexity of your company's processes into a compact form, which you can easily show to external partners for quick understanding.
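To make the cause-effect equation concrete, it can be sketched as a tiny weighted model. All feature names, values and weights below are hypothetical, purely for illustration:

```python
# Illustrative sketch: the cause-effect equation as a simple weighted sum.
# Hypothetical measured intensity of each cause (X values, scaled 0..1)
causes = {
    "bad_service_tech": 0.7,      # X1: share of calls hit by phone-system outages
    "untrained_agents": 0.4,      # X2: share of agents without full training
    "low_quality_products": 0.9,  # X3: share of products with known defects
}

# Hypothetical weights expressing how strongly each cause drives the effect
weights = {
    "bad_service_tech": 0.5,
    "untrained_agents": 0.2,
    "low_quality_products": 0.3,
}

# Y = w1*X1 + w2*X2 + ... + wn*Xn
dissatisfaction = sum(weights[x] * causes[x] for x in causes)
print(round(dissatisfaction, 2))
```

In a real project the weights would of course be learned from data rather than guessed, but the structure of the model mirrors the equation exactly.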

But before we reach this lovely equation, you will have to walk down a thorny path. That path is covered by the Measure and Analyse Phases.

The main purpose of the Measure Phase is to find all relevant causes of your perceived problem. For that reason, the Measure Phase is equipped with a set of tools and structured meetings that can be used to engage all relevant stakeholders in your company to “discuss and extract” potentially relevant causes.

In the following section we will discuss a few of the most important tools of the Measure Phase.

Tools and methods of the Measure Phase

Yesterday we talked about how the Measure Phase helps to find and define causes of your perceived problem. Today we are going to look at a few tools and methods that are used to achieve this goal.

The first step in finding the reasons for your problem is holding brainstorming sessions with all potentially relevant stakeholders involved in the problem. Sticking with our problem of high customer dissatisfaction, we would involve service desk employees, their team lead and their head of division, but also customer-facing employees like account managers and sales managers, as well as product managers and technical personnel, because customers mostly complain about products that do not work. Do not keep the group of people in your brainstorming too small, because you want to include all customer touchpoints. Each person interacting with customers and with the product might hold data and information that we can use later.

In the brainstorming sessions, stakeholders together fill out a so-called fishbone diagram. Fishbone diagrams break down problem sources into six pre-defined groups (typically people, machines, methods, materials, measurement and environment) and discuss potential causes within those groups or categories. This pre-definition helps to guide group discussions and helps people stay focused on one topic. The result of the brainstorming sessions will be a long list of potential causes of the problem.
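At its core, a fishbone diagram is just a categorized list of causes, so it can be sketched as a plain data structure. The category names follow the classic six-group scheme mentioned above; the causes are hypothetical examples from the customer-dissatisfaction brainstorming:

```python
# A minimal sketch of a fishbone (Ishikawa) diagram as a plain data structure.
# Categories are the six pre-defined groups; listed causes are hypothetical.
fishbone = {
    "people":      ["customer agents are untrained"],
    "machines":    ["phone system breaks down under load",
                    "headsets are of bad quality"],
    "methods":     ["no escalation process for complaints"],
    "materials":   ["sold products are of low quality"],
    "measurement": ["call duration is not tracked"],
    "environment": ["noisy open-plan call center"],
}

# The brainstorming output is simply the flattened list of potential causes
potential_causes = [cause for causes in fishbone.values() for cause in causes]
print(len(potential_causes))
```

Keeping the result in a structure like this makes the later reduction and prioritization steps straightforward to automate.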

After that, all processes involving the causes named in the brainstorming sessions are mapped. This process discovery is one of the most important yet most overlooked steps in A.I. projects. It puts potential causes in relation to each other and shows how people cause perceived problems while interacting across departments, teams and time. After developing a set of increasingly detailed process maps, which we will look at in later chapters, the process maps are converted into so-called value stream maps, where each process step has to show which data points are created there and what value it creates. These process step parameters will later be used by your A.I. outsourcing partner to fuel their models. As you see, your initial business problem was first embedded in your business processes, and the process steps are now converted into data-producing units. If you reach this point, your outsourcing partner will find it much easier to bridge the gap between your business and its ability to create data models.
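A value-stream-map entry can likewise be sketched as a small data structure, where each process step records the data points it produces and the value it adds. All step names and data points here are illustrative assumptions:

```python
from dataclasses import dataclass, field

# A sketch of a value-stream-map entry: each process step records which data
# points it produces and what value it creates. All fields are illustrative.
@dataclass
class ProcessStep:
    name: str
    value_created: str
    data_points: list = field(default_factory=list)

value_stream = [
    ProcessStep("ticket creation", "complaint is recorded",
                ["timestamp_created", "customer_id", "complaint_category"]),
    ProcessStep("agent call", "issue is diagnosed",
                ["call_duration", "agent_id", "resolution_code"]),
    ProcessStep("ticket closure", "case is resolved",
                ["timestamp_closed", "customer_satisfaction_score"]),
]

# These per-step data points are what an A.I. partner would feed into a model
all_data_points = [p for step in value_stream for p in step.data_points]
print(len(all_data_points))
```

Handing a partner a list like `all_data_points`, tied to named process steps, is exactly the bridge between business process and data model described above.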

In a next step, you will again come together with your brainstorming brain trust and reduce the potential causes of your problem. Why would you do that? Well, imagine you came up with 100 reasons for bad customer experience. Fixing all of these problems would take a long time and cost a lot of money. Instead, you should always focus on the main reasons for your problems. This will always yield the highest ROI. The following example makes it clearer: imagine 9 out of 10 customers complain about products being broken after having been shipped, and only 3 customers complain about Jim in customer support. What do you do? Well, it is obvious: fire Jim 😉 Joking aside, of course you would look at a new packaging solution, which in this case does not involve any A.I. at all.
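This reduction to the main causes is essentially a Pareto analysis: sort causes by frequency and keep the few that explain most of the complaints. A sketch with hypothetical complaint counts:

```python
from collections import Counter

# Pareto-style reduction sketch: count hypothetical complaint reasons and keep
# only the top causes that together explain roughly 80% of all complaints.
complaints = Counter({
    "product broken after shipping": 90,
    "long waiting time on hotline": 25,
    "unfriendly support (Jim)": 3,
    "invoice errors": 2,
})

total = sum(complaints.values())
cumulative, top_causes = 0, []
for cause, count in complaints.most_common():
    if cumulative / total >= 0.8:   # stop once 80% of complaints are covered
        break
    top_causes.append(cause)
    cumulative += count

print(top_causes)
```

With these numbers, the two packaging- and hotline-related causes cover well over 80% of complaints, and poor Jim never makes the shortlist.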

AI Project Management – Define Phase

The first step of each process optimization project is the Define Phase. This phase is often overlooked and, if not handled properly, causes the biggest issues later on. Here, the problem you want to solve is clearly defined and drilled down until it can be expressed in numbers.

The Define Phase can roughly be broken down into two parts.

In the first step you select and define your project in a structured way. Furthermore, you identify all stakeholders involved. All relevant questions here are broken down into three documents:

  • the business case
  • the project charter
  • and the voice of the customer

The second step looks at a rough process exploration. It ends with a high-level process map that roughly identifies the process creating the relevant process output (e.g. customer dissatisfaction) you want to fix. In later stages we will use this high-level process map to create second- and third-level process maps, drilling deeper and deeper into detailed processes. The high-level process map might describe processes flowing through the whole company, while the 2nd-level process map describes a process part within a department, and the 3rd-level process map describes work on a more microscopic level, really defining what a person or machine is actually doing.

Step one: Project selection, Project Definition and Stakeholder Identification:

Business Case
Each business case statement always follows the same pattern and reads like this example:

During FY 2005, the 1st Time Call Resolution Efficiency for New Customer Hardware Setup was 89%. This represents a gap of 4% from the industry standard of 93%, which amounts to US $2,000,000 of annualized cost impact.

Try it yourself. Imagine a project you want to work on.

During ____________ (period of time for baseline performance), the ____________ (primary business measure) for ____________ (a key business process) was ____________ (baseline performance).

This gap of ____________ (business objective target vs. baseline) from ____________ (business objective) represents ____________ (cost impact of gap).

Could you already fill in the blanks above? Did you gather all business-relevant data to comfortably fill them out? If not, you are not ready to truly see the business side of your A.I. project. So keep going.
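The fill-in-the-blank statement can be sketched as a small template function that also computes the gap from baseline and target, so the arithmetic can never drift out of sync. All filled-in values are hypothetical, mirroring the example above:

```python
# A small helper that assembles the business-case statement from its parts.
# The gap is computed from baseline and target; all values are hypothetical.
def business_case(period, measure, process, baseline_pct, target_pct, cost_impact):
    gap_pct = target_pct - baseline_pct
    return (
        f"During {period}, the {measure} for {process} was {baseline_pct}%. "
        f"This represents a gap of {gap_pct}% from the industry standard of "
        f"{target_pct}% that amounts to {cost_impact} of annualized cost impact."
    )

statement = business_case(
    period="FY 2005",
    measure="1st Time Call Resolution Efficiency",
    process="New Customer Hardware Setup",
    baseline_pct=89,
    target_pct=93,
    cost_impact="US $2,000,000",
)
print(statement)
```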

Project Charter

After your business case has been formalized, the project charter is developed. It is a set of documents that bridges normal project management documents (e.g. the stakeholder analysis) with data-focused documents that drill down on defining and quantifying the broad statements from the business case.

For example, if you want to do anomaly detection, the project charter demands a clear definition of the word “anomaly”. What do you expect to be an anomaly? 10% less customer satisfaction? 20% less? Or any deviation, plus or minus, from the mean or even the median?
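One way to make such a definition explicit is to write it down as a short rule. The sketch below uses a two-standard-deviations rule on weekly satisfaction scores; both the scores and the choice of two sigma are illustrative assumptions, not a prescribed definition:

```python
import statistics

# One possible, explicitly stated definition of "anomaly": any weekly
# satisfaction score more than two standard deviations from the mean.
scores = [82, 85, 84, 83, 86, 61, 84]  # weekly customer satisfaction in %

mean = statistics.mean(scores)
stdev = statistics.pstdev(scores)

anomalies = [s for s in scores if abs(s - mean) > 2 * stdev]
print(anomalies)
```

Whatever rule the business chooses, writing it this explicitly in the charter is what prevents the data scientist from silently inventing one later.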

The process of creating the project charter also forces you to focus more and more on very specific issues of your business case and helps you narrow down the scope of your problems. This focus on a reduced, data-oriented and well-defined project is the first step towards a successful project. Also, your business case is now stated as a scientific, mathematical hypothesis – easily convertible into “A.I. language”.

Once the project charter has been developed, many of our clients are amazed at how many people in their company will be involved in the project.

The second document is the Voice of Customer.

This is also often overlooked: we frequently jump into new A.I. technology without asking ourselves whether our customers actually benefit from it. So, this document looks into what your customer really wants and needs. Have you really understood your customers? Did you talk to them? Has your team gathered enough information to answer the question of which item, function or service is perceived as critical by your customer? The content of this document will later be needed for a thorough root-cause analysis. If you do not understand your customer, your A.I. outsourcing partner will definitely not be able to fill this gap for you – unless the project is called “understand my customer better with A.I.”.

Step Two: Process exploration

As already described in the last section, this step tries to model the underlying process in which your problem occurs. From a process management perspective, each problem is considered a process output, and all causes of a problem are process inputs. Lining up many inputs, you have yourself a process map. Mapping your daily work into processes needs a bit of practice and expertise. Not everybody is naturally born with the patience and detail-oriented mindset to do this from day one. But with a bit of practice and training, you can learn to view your company with a process-oriented mindset. However, if you are facing a complex process, I would suggest getting outside help so as not to fail your A.I. project early on.

In the following section we will focus on the Measure Phase and see how to find causes and root causes of your problems.

AI Project Management Framework – Overview

Process management is at the heart of each A.I. project, and process optimization, a special discipline within process management, offers the tools to bridge the gap from general project management to A.I. project management.

Understanding process optimization frameworks and being able to apply their tools and methods thus means significantly improving the outcome of A.I. projects.

Process optimization can be viewed from two perspectives: output optimization, and value optimization or waste reduction within the process itself.

Output optimization began in the 1980s at Motorola and later at General Electric and has been called Six Sigma since the early 2000s. Process step optimization, or lean management, has its beginnings with Ford's Tin Lizzie and Ford's learnings from the assembly lines of the slaughterhouses, and was professionalized within the car industry, especially in Japan.

These two optimization frameworks were combined with the foundation of Lean Six Sigma roughly 20 years ago.

Lean Six Sigma (LSS) has many advantages. On the one hand, it has a structured body of knowledge, international standing and international certification. This means there is a structured and testable approach to hiring certified experts in this field or training your own employees. LSS has several levels of expertise. I, for example, hold a Black Belt, but there are also entry-level belts like the Yellow or Green Belt, and later on people can obtain a Master Black Belt.

On the other hand, it is the most data-oriented management framework I know, which puts it in close vicinity to A.I. projects, as data is the fuel A.I. runs on.

A successful A.I. project, run with Lean Six Sigma, is conducted in five phases. Each phase ends with a distinct set of documents that are handed down to the next phase. Actual coding, or what most people consider “the actual A.I. project”, only happens in the fourth phase.

The five phases are:

  1. Define
  2. Measure
  3. Analyse
  4. Improve
  5. Control

Successful discussions with technical outsourcing partners can only start once phases one, two and three have been finished within your company. Skipping them will mostly lead to costly re-work and extended project periods. If you don't have the capacities within your company, get help from intermediary consultancy firms like Goldblum to prepare yourself for the talks with the outsourcing partners. It cannot really be expected of a specialized A.I. company to be competent in process (optimization) management frameworks.

In the next section we will look at the Define phase.

Process Optimization

In the previous sections we looked at which frameworks are used in A.I. projects. As we have seen, in addition to methods such as PRINCE2 on the business side, SCRUM is used on the developer side. Indispensable here is the role of the ProductOwner-Plus who, as a link between PRINCE2 and SCRUM, must unite both worlds with company and industry knowledge.

Moreover, the use of process management methods is indispensable in order to adequately describe and define data generation and data flows and to convert all “worldly terms” into clear data terms.

However, it is often the case that before data science can be considered, the processes and data collection themselves actually need to be optimized first. The acerbic artificial intelligence saying “garbage in, garbage out” describes very well that the best data model will produce poor results if the data itself is mislabeled, outdated, stored incorrectly, or aggregated incorrectly.

Let me give you an example from everyday life. Imagine a call center or IT service desk wants to optimize the time between ticket creation and ticket completion with machine learning. As you capture the processes, you notice that employees don't capture every call in a ticket, or forget to close tickets when the case is resolved. On this basis, you can forget about any algorithm, because the data doesn't match reality.

And this is where process optimization comes in. It can of course also be used for very complex processes such as chemical reactors or machines operating in the high-frequency range. There, special mathematical analysis methods are used to optimize the processes in order to detect, record and, in the best case, eliminate the “data disturbances”.

Process optimization goes way back to Chicago's slaughterhouses and Ford's Tin Lizzie; it became widespread in Japan in the 1980s and was reflected in the business world as Lean Management. Optimization of processes with a focus on comparing inputs and outputs emerged in the 1980s at Motorola, found its way into the business world through its use at General Electric, and has been known as Six Sigma since the early 2000s.

During the last 20 years, the realization matured that the two elements Lean and Six Sigma together are an unbeatable weapon in process optimization, and thus also in the successful development of A.I. projects, and can form a good basis here, especially in the fight against “bad data”. To those in the know, the combination of the two streams is known as LSS.

Due to the high effectiveness of LSS when used in A.I. projects, next week we will dedicate ourselves entirely to Lean Six Sigma, get to know its five phases Define-Measure-Analyse-Improve-Control, and see how we can map them to the individual phases of the development of A.I. projects.

Process management

The short comparisons of the last sections between waterfall project management methods and SCRUM could theoretically apply to all IT outsourcing projects in general, but due to the proximity between software development projects and A.I. projects, it is important to look at this topic specifically.

Something that is very peculiar to A.I. projects, however, is the additional level of processes, because data is often generated, modified, deleted and stored in a chain of process steps. Moreover, these process steps are often linked to business processes in which machines and humans generate data together.

Neither SCRUM nor “normal project management” pay special attention to processes in their frameworks, so there are generally no tools or methods available to adequately capture them. Anyone who has ever tried to capture living processes in a company in a formalized way knows that it is not easy. On the one hand, because it feels like every employee carries out the process differently and, on the other hand, people find it very difficult to express their work in formalized process terms. But since the cooperation of the employees is incredibly important in order to represent processes correctly, and to understand where which data is being collected (or not), the process manager needs special tools and skills to get the right information from the employee in all its fuzziness.

Furthermore, it is important to ensure that, in addition to a process definition that has been validated several times, each process step can also be expressed in numbers/data, because an A.I. project thrives on working with (good) data. This may sound banal, but quite often terms are not defined by the business side and are then silently interpreted by the data scientist further down the chain. This, as you can imagine, quickly leads to distortions when these terms are never questioned in later live operations. Let me explain this in a little more detail. Imagine that you, as a business owner, have not defined the term customer satisfaction, or have not defined how low customer satisfaction differs in numbers from high and medium customer satisfaction. Imagine that the data scientist, as a frugal person, assumes anything above 65% to be high customer satisfaction, while in your heart you actually assume above 90%. In the later dashboard, everything would then be green, while it is actually on fire. A classic melon problem: green on the outside – red on the inside.
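The melon problem fits in a few lines of code: the same measured value turns green or red depending purely on whose threshold is applied. The thresholds below are the hypothetical 65% and 90% from the example:

```python
# The "melon problem" in one function: the same satisfaction value is judged
# green or red depending on which (undocumented) threshold is used.
def dashboard_color(satisfaction, high_threshold):
    return "green" if satisfaction >= high_threshold else "red"

satisfaction = 70  # hypothetical measured customer satisfaction in %

data_scientist_view = dashboard_color(satisfaction, high_threshold=65)
business_owner_view = dashboard_color(satisfaction, high_threshold=90)

print(data_scientist_view, business_owner_view)
```

The fix is organizational, not technical: the threshold must come from a written business definition, not from a default chosen in code.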

In the next section we will deal with a special but important form of process management – process optimization.

Silent heroes – Product Owners 2.0

Pure SCRUM makes “reportability upwards” much more difficult. What we see today in many successful companies is that a healthy mix and, above all, an informed respect for the two systems has developed. For example, the Product Owner role in SCRUM is given much more responsibility and authority to act as an interface to existing project management and company management methodologies.

This role of PO or PO+ should not be underestimated in any way, as it is really the true interface into the company. This means that developers, data scientists and Scrum Masters may well follow standardized roles, but the Product Owner should have individualized company and industry knowledge. Let's see what this might mean for an outsourcing project:

Let's assume you want to hire an A.I. outsourcing company. Many A.I. outsourcing companies use pure SCRUM. This means that the roles of data scientist and Scrum Master are not critical for integration into your company. What matters with the Scrum Master is that he masters his job, and that the data scientists, for example, master their subject. But the Product Owner role is critical, and you should spend a lot of time here analyzing their Product Owner as in a job interview. Does he try to recognize or get to know your company structure? Does he have the technical and professional experience to quickly understand the needs of your industry (e.g. reporting needs, legal regulations) or company? Imagine your whole company works strictly according to PRINCE2 and the outsourced Product Owner does not know PRINCE2 at all. Don't you think that this will lead to problems? Don't you think that the interface within the company will have to be continuously readjusted? Don't you think that you will suddenly have to add another resource to this Product Owner, which in turn will increase your costs again and which is not always crowned with success, even in complex projects? One could write a lot more about this topic, which would go far beyond the scope of a LinkedIn bon mot. What I would like to share with you here in brief is that it is very important to look at the frameworks in which your outsourcing partners do their regular work. Basically, it doesn't matter what their rhythm is; it just has to fit harmoniously with your rhythm.

In the next section we will finally turn to process management.

The difference between waterfall and agile frameworks

Since the waterfall logic of the old project management methods was sacrificed to the necessary agility of software development, tensions still arise in many companies today about how to deal with systems that feel diametrically opposed. Even if IT departments often use SCRUM for software development and thereby achieve much better results, because the developers finally have two weeks of breathing room during a sprint to work in peace, the interface to decision makers, management and finance is often exposed to great friction. Let me explain this a bit more.

Everyone who has ever worked in classic project management knows the triangle of project management: time, scope and costs. For example, if the scope of a project increases, the costs also increase and the project takes longer. Or if you want to complete a project in the same amount of time with fewer people, the scope of the project decreases. But if you want to implement a larger scope in the same time and with the same resources, the transcendent fourth dimension of the triangle – quality – suffers.

Many companies today still rigidly follow the project triangle in daily life. In the fourth quarter, funds are allocated for the projects of the following year (cost fixing), the live start of the project is planned (time fixing), and the scope is only minimally subject to disposition. This leads to the fact that SCRUM, which requires much more flexibility in time and thus costs to keep the scope, later often works flexibly on the fourth dimension of the project triangle: quality. That is, there are results that are “in time, in budget and in scope” and at the same time of lower quality. And then SCRUM is questioned by many, since it “apparently brings nothing”.

On the other hand, we also see that companies which allow their teams pure SCRUM are exposed to the risk that the teams exploit this, because the developer teams set the speed of their own accord and can use the sprint to simply work less. Quite often, I hear from decision makers: “Somehow, I lack control.” In what's called Estimation Poker, the team determines how long each task should take over the next two weeks. Anyone who has sat in these Estimation Poker rounds will agree with me about how highly things are sometimes estimated here. After all, the higher the estimate, the more time a developer has to complete a task. And who wouldn't like to have a more stress-free work schedule? That is, SCRUM, relying on the high personal responsibility of all participants, only works if this freedom is not exploited by the individual.

In the next section we will have a closer look at the important role of the product owner within SCRUM, since many outsourcing companies primarily work according to SCRUM and the product owner is the most important interface to the customer. Thursday we will turn to process management.

Comparing different project management frameworks

In this chapter we address the question of which frameworks are best suited for A.I. projects. You can look at A.I. projects from three perspectives:

  • First, they are projects, which means they could be handled with project management methodologies like PRINCE2 or PMI.
  • Second, they are very close to software development projects, so one could tend towards SCRUM.
  • Finally, A.I. projects very often represent metricized processes, so approaching them with process management methods like Six Sigma or Lean Six Sigma would also be possible.

Why should this question be asked at all? What impact does the choice of management framework have on the success of an A.I. project? To answer this, let's go back in time a bit:

The development of project management methodologies in modern times dates back to the 1960s, when the Project Management Institute was founded and laid the foundation for structured project management with the PMBOK. Besides PMI, IPMA and PRINCE2 dominate the market as project management methods. Internationalization, growth and sustainable knowledge management had made the standardization of project management necessary. Success came very quickly, and the PMI today has over 600,000 members. Project management methods generally answer the question of how projects should best be carried out.

However, with the advent of IT projects, it became clear that the waterfall project management methods common until then were not consistently successful, as software development requires much more iteration than non-IT projects. As more and more IT projects failed or took exorbitantly longer to complete, agile project management methodologies such as SCRUM were introduced for software development. Scrum is a rugby term and describes a formation that looks rather wild from the outside but follows a certain logic on the inside, albeit a painful one for many participants.

In contrast to common methods, pure SCRUM only defines clear goals for so-called sprints, which usually last 1-2 weeks. After a sprint, the results are shown to the so-called stakeholders. The central role in Scrum is held by the product owner, who acts as a substitute for the project manager. The Scrum Master monitors during this time that the Scrum ethos is adhered to. In SCRUM, the concept of the project was deliberately discarded and replaced by the product, which moves the focus away from the process and towards the output. This has a great impact in reality, especially in project-oriented companies.

In the next section we will see the difference between waterfall and agile project management methods. Later we will look at the role of the product owner.

Realizing Trustworthy AI – Non-technical Methods

In this chapter we are talking about the non-technical methods to realize the seven requirements of “Trustworthy AI”. In contrast to the last chapter, they all have a non-technical, regulatory and organizational character. As with the technical methods, this list evolves over time and should not be seen as finished.

The methods described by the HLEG are:

  1. Regulation
  2. Codes of Conduct
  3. Standardisation
  4. Certification
  5. Accountability via governance frameworks
  6. Education and awareness to foster an ethical mind-set
  7. Stakeholder participation and social dialogue
  8. Diversity and inclusive design teams

Regulation vs Codes of Conduct

Regulations are external rules, while Codes of Conduct are internal rules. Both will have to be revised in light of artificial intelligence. While companies have to wait until governmental regulations are ready to be used, each company already has the ability to include the Ethical Guidelines of the EU in its Codes of Conduct to ensure that the company will follow them. Codes of Conduct are not legally binding for the company, but companies normally develop processes and internal rules out of them, thus reflecting these codes within their living culture, and can “force” employees to follow them by stating them in their working contracts.

Standardisation

Following international standards like ISO is not mandatory for every company in every industry, but it shows customers a dedication to certain goals like good management or IT security. Many companies are already applying existing standards and norms to AI. The Institute of Electrical and Electronics Engineers (IEEE) launched IEEE P7000 in 2016 – a draft model process for addressing ethical concerns during system design. This already has an ethical focus, but still addresses all kinds of systems and not just AI systems. In the future we will probably see standards designed specifically for Trustworthy AI.

Certification

In the old days, we could rely on a bachelor's or master's degree in a certain field, but today the universities do not seem able to keep up with the pace of change in IT and AI. Thus, non-university degrees and certifications seem better suited for demonstrating expertise in a given field of IT or AI. Coursera, one of the world's largest providers of online training, for example offers the IBM AI Engineering Certificate, and the IASSC offers Lean Six Sigma certifications for process optimization. Over time these certifications will probably become more standardized and will follow the AI pipeline and AI life cycle.

Education and awareness to foster an ethical mind-set

Certifications are normally acquired outside of the company, while in-house education reaches all employees in the company. Companies that invest in in-house education will be able to ensure that awareness of ethical topics is present in all employees. Change the system, change the minds.

Accountability via governance frameworks

It is good practice to dedicate specific people in a company to a specific topic. Otherwise everyone thinks that “the other” is doing it, and in the end no one has done anything. Thus, appointing a person or a team to ensure that internal frameworks are followed, certifications are obtained, standardization is aimed at and educational awareness courses have been attended will yield the best results.

This concludes our discussion about the EU framework of “Trustworthy AI”.

Realizing Trustworthy AI – Technical Methods

As we have seen in the sections above, we have the four fundamental principles of Trustworthy AI (respect for human autonomy, prevention of harm, fairness and explicability).

Out of these principles we derived the seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity and fairness; societal and environmental wellbeing; and accountability.

The HLEG suggests both technical and non-technical methods to fulfill these requirements. In this chapter, we will look at a few technical methods. This list, as with everything in this document, is not complete and will develop over time.

Architectures for Trustworthy AI

An architecture for Trustworthy AI could try to “translate” the above-mentioned requirements into processes or procedures. These processes and procedures could be labeled ethically desirable or ethically non-desirable, and with business process engines you could, for example, use process heat maps to determine to what extent your AI system is working in an ethical manner.

As AI systems are learning systems, we can try to implement guarding points along the learning process. In general, a learning system (human, animal, artificial) uses senses or sensors to acquire data (light, touch, smell, sound), then uses cognitive components to plan a certain behaviour based on the sensor data, and then acts upon it. An architecture designed in favor of trustworthiness could have guard processes for sensing, planning and acting to ensure an overall ethically desired behavior. The important thing here is to implement such a solution at the concrete, detailed level.
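A rough sketch of such a sense-plan-act loop with guard processes at each stage might look like the following; all sensor values, decisions and checks are purely illustrative:

```python
# Sketch of a sense-plan-act loop with a guard check at each stage, as one
# way to build trustworthiness into the architecture. Checks are illustrative.
def guard(stage, value, check):
    """Reject a stage's output if it fails its plausibility/ethics check."""
    if not check(value):
        raise ValueError(f"guard at '{stage}' stage rejected value: {value!r}")
    return value

def sense():
    return {"speed_kmh": 42}          # hypothetical sensor reading

def plan(observation):
    return {"action": "brake", "strength": 0.3}  # hypothetical decision

def act(decision):
    return f"executing {decision['action']}"

observation = guard("sense", sense(), lambda o: 0 <= o["speed_kmh"] <= 250)
decision = guard("plan", plan(observation), lambda d: 0 <= d["strength"] <= 1)
result = guard("act", act(decision), lambda r: isinstance(r, str))
print(result)
```

The point is structural: every stage's output passes an explicit, auditable check before the next stage may consume it.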

Ethics and rule of law by design

Nowadays companies already use “by-design” architectures when they develop privacy-by-design or security-by-design systems. The HLEG suggests taking this idea and extending it to the concept of Trustworthy AI. What could that look like? Well, imagine we want to fulfill the requirement of technical robustness and safety. Then we could build in, for example, fail-safe switches that trigger if the system is compromised – like the fail-safe switch in hair dryers that trips when they fall into water.
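A software analogue of such a fail-safe switch might look like this minimal sketch, where the system falls back to a safe default instead of acting on implausible data. The plausibility check and the safe default are illustrative assumptions:

```python
# A minimal software fail-safe sketch: if any integrity check fails or the
# computation raises, the system falls back to a safe default.
def fail_safe(run, is_compromised, safe_default):
    try:
        result = run()
        if is_compromised(result):
            return safe_default
        return result
    except Exception:
        return safe_default

reading = fail_safe(
    run=lambda: {"temperature": 9999},                 # implausible sensor value
    is_compromised=lambda r: r["temperature"] > 200,   # plausibility check
    safe_default={"temperature": None, "mode": "shutdown"},
)
print(reading)
```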

Explanation methods

The field of Explainable AI, or XAI, is dedicated to answering the question of how to explain the sensing, planning and acting processes within AI systems. It is cute when an AI system mistakes a puppy for a bagel, but not so cute when a self-driving car ignores a moving truck because its tarpaulin shows a picture of an open sky. It is advisable for all AI developers to follow the results of XAI closely and learn from its findings.

Testing and validating

It is important to mention that software testing and AI testing are not the same. Many companies try to develop AI systems with methods of software development, but that does not always lead to success. The same applies to testing. Just because you have a test engineer in house, you cannot be sure that he or she is able to test AI systems. AI systems need to be tested continuously and at all stages. A sometimes fun task (not for the AI engineers) is to create adversarial or red teams within your company, whose only purpose is to break the system. In this way you are not confronted with biased testers.

In the next chapter we are going to discuss non-technical methods to meet the requirements of Trustworthy AI.