January 28, 2020 | Season 2, Bonus Episode 4

Implementing behavioral sciences in agile teams

With Andre Morys (konversionsKRAFT)

Implementing behavioral sciences in agile teams seems to be a struggle that many face. Learn why and what you can do to overcome this.

Episode guest

Episode host

Guido X Jansen

Guido has a background as a cognitive psychologist and has worked as a CRO specialist on both the agency and client side. In 2020, Guido won the individual Experimentation Culture Award.

Shownotes

André founded his company konversionsKRAFT back in 1996 as a UX agency and quickly specialised in user research and psychology. After improving the economic outcomes of some big e-commerce websites in the early 2000s, he realised what a big impact the combination of customer-centric and economic views has. Andre is the author of two books (currently only available in German), a university lecturer, blogger, initiator of the "Global Optimization Group" (a network of outstanding CRO agencies), and a keynote speaker on the topics of customer experience and growth. With nearly 100 people, his consulting company konversionsKRAFT is regarded as one of the biggest providers of growth consulting and services globally.

Book(s) recommended in this episode

Transcript

Please note that the transcript below is generated automatically and hasn't been checked for accuracy. As a result, the transcript might not reflect the exact words spoken by the people in the interview.

Andre Morys: [00:00:00] Yeah, thanks for having me. I'm glad to be here. I started my company back in 1996, which is quite a long time ago. Back then nobody talked about CRO; we called it an internet agency, or a web agency, and we quickly focused on user experience and some methodology. We quickly realized that if you can guide your clients with methodology, they like it.

So we started with user research, and we had some very early aha moments that what we do potentially delivers a lot of money to our clients. I think it was back in 2004 when we started the systematic approach of optimizing a lead generation website. That was the first client where we measured the impact of our work.

It was something like 300 million euros a year, and we were fascinated by that: wow, that makes a big difference. But that was still the time of scaling businesses with AdWords, so nobody was interested in hearing our story. It was actually a tough time, because we thought what we do is really clever.

But there was no demand on the market. I would say it took another ten years until people really started to care about the economic outcome of their websites and user experiences. But my background, to get back to your question, is a user experience view and qualitative user research, which then led us to the economic outcomes and measuring them.

Guido X Jansen: [00:01:39] Yup. Yeah. And I think ten years ago, and even until maybe five years ago for a lot of industries, it was just cheaper to buy AdWords and throw more users at your website. And it's also very measurable, of course.

Andre Morys: [00:01:56] Yeah. I remember some awkward pitch presentations where we presented our methods and it was as if a secret door in the floor opened and we were dropped out, with somebody saying: what are they talking about? Next agency, please. And they presented some cool Flash intro with animations, and that was really creative, and of course the client went with that agency. We had the impression that we did something wrong; actually, it just took some time until these topics got the right perspective.

Guido X Jansen: [00:02:35] Yeah. I think a lot of companies now, when you bring up the topic of CRO, see it as a sexy topic, especially when you throw around terms like eye tracking or MRI scanners or neuromarketing. But then when you actually start doing the work, they get bored really quickly, because it's actually hard work, right?

Andre Morys: [00:02:58] Exactly. That's a big challenge: when you do the job properly, most of the time it's more work. It's more effort to do it right than to just do it quick.

Guido X Jansen: [00:03:12] Exactly. And that's the topic we are talking about today: whether talking to agencies or clients, implementing behavioral sciences in agile teams seems to be a struggle that many companies face. Why do you think that is?

Andre Morys: [00:03:26] I think it's normal that organizations, as they get bigger, when you scale them, or if it's already a big traditional organization, usually focus on efficiency.

So there will always be somebody demanding a tool or some silver bullet to do the job. When you go around companies that focus on agile work and agile teams, they have all these sticky notes in their offices and a burn-down chart, and you see so many things that have an implicit focus on efficiency.

People think they do the right thing because they are busy. And I would say the bigger the organization, the more often you see that mistake. This is why I think nobody actually wants to do something that causes more effort, because using all these disciplines you mentioned before, like behavioral economics, neuromarketing, and so on, is more effort. It's more effort to create better hypotheses.

It's more effort to do proper user research, to understand customers' behavior, and so on. It's always more effort. If you want to put that into a streamlined agile process for people that focus on the burn-down chart, it won't happen, as long as they can console themselves and say: but yeah, we do A/B testing, we did a couple of experiments last month.

Some manager will say: yeah, but look, we do A/B testing, of course we are innovative, we do experimentation. And nobody is asking the question: but are you doing it the right way? Maybe it's not about the amount of sprints you do, the amount of releases, the amount of experiments; maybe it's more about the quality.

And this is something that most bigger organizations don't cultivate, don't foster. There is no focus on being more effective.

Guido X Jansen: [00:05:39] Yeah. And do you think there's an inherent issue with embedding behavioral science in agile teams? Is that doomed right from the start, or is there a way to make it successful?

Andre Morys: [00:05:51] It's difficult, no matter if it's an agile organization or a traditional one. I would say as soon as the organization is too big, it's hard to implement. On a very core level, the task is to operationalize everything that needs to be done, and topics like behavioral economics, customer research and so on are most of the time not operationalized inside these organizations. They have their daily stand-up meetings and Scrum and sprint reviews and retros and whatever, but there is no meeting that asks: is that the right hypothesis to test? There is no operationalized meeting or step in the process that asks: how do we prioritize this?

Most of the time there are people focusing on business impact, and they're guessing, just guessing what will happen, and they're not measuring the outcomes of their work. Maybe they're afraid it will show them: oh, there is no outcome, or it's not that positive. So this is why I think it's difficult no matter what kind of organization, but I would say the agile organization doesn't have any advantage compared to the traditional organization when it comes to fostering effectiveness. So managers look for the silver bullet and think: ah, we implemented agile processes in our organization, so we're fine. That completely misses the point that this kind of organization is still not focusing on effectiveness.

Although it should. And maybe it does for small units, for small companies, for startups, but not for big ones. That's something I see a lot of times when I work for companies: when we analyze their backlogs and their A/B test results, we see that what they're doing is just scratching the surface, not moving forward.

Guido X Jansen: [00:08:03] So what do you think most companies should start with doing or executing?

Andre Morys: [00:08:10] I usually start with a very boring topic, and that's the topic of prioritizing. What we do when we start working for a client is look at the existing backlogs and the existing ideas, and we try to match these hypotheses or testing ideas, however you call them, with customer behavior.

We start with a theory: if it's a good hypothesis, then it's able to change customers' behavior. So what do we know about customer behavior? What are motivators? What are demotivators? How can you fit your hypothesis to what we know? And what you usually see is that 90 to 95% of all their hypotheses just start to disappear, because usually the motivation for doing that test was: we didn't know exactly who has the right opinion, or whether we should do it this way or that way. So then I say: you're not experimenting, you're guessing, right? Those are two completely different things. Analyzed through the right lens, most backlogs disappear, and at the end there are only two or three kind of good ideas left to work on. And then people automatically start asking: what should we do instead, if these hypotheses are not that good?

So that's why we start with analyzing and prioritizing what's inside the existing backlog: to realize it's not connected with customer behavior and therefore, most of the time, not effective.
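(To make that backlog triage concrete, here is a minimal sketch in Python. The ideas and field names are invented for illustration; this is not konversionsKRAFT's actual process.)

```python
# Illustrative sketch: filter a testing backlog by whether each idea
# names the customer-behavior mechanism it is supposed to change.
backlog = [
    {"idea": "Move the CTA above the fold", "behavioral_rationale": None},
    {"idea": "Show delivery time on the product page",
     "behavioral_rationale": "reduces uncertainty, a known demotivator"},
    {"idea": "Change the button color", "behavioral_rationale": None},
]

# Ideas that can't name a motivator or demotivator are guesses, not hypotheses.
grounded = [item for item in backlog if item["behavioral_rationale"]]
print(f"{len(grounded)} of {len(backlog)} ideas survive the behavior check")
```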

Guido X Jansen: [00:09:57] [Sponsor break (Dutch): Braingineers and their emotion analytics platform Brainpeek, combining eye tracking, mouse tracking and emotion measurement.]

I often encounter companies where you need to completely redo their whole set of KPIs, the way they measure things. I often see companies that, like you just said, are A/B testing without realizing that A/B testing is just part of the validation.

Did you do to work before that? but also often that's companies, or at least e-commerce teams, they are looking at KPIs or they don't even work on KPIs that actually benefit the company. They are, they work with trickled down KPIs or very narrow KPIs that they might be optimizing for that.

but in actuality, they're not really adding any value to the company. Is that something you encounter often or.

Andre Morys: [00:11:14] Yes, a lot of times. I would say reducing the work to the KPI is like a doctor who only treats the symptoms and not the cause of a disease. The KPI is the symptom, and the cause is the behavior of the customer and the trigger that you deliver to the customer.

If a hypothesis starts with "we will increase this and that KPI by doing that", I usually start with the first question: how is what you change on the website connected to the KPI? A direct connection between what they want to change on the website and the KPI is usually an indicator of a superficial hypothesis that is not based on customer behavior.

And there's one thing you can quickly implement: I recommend using a hypothesis template so you can prioritize better, because the hypothesis template should force you to think about customer behavior. The template should have a part you need to fill in: why does this change the customer's behavior? Because if it's not changing the behavior, your KPI won't change.

That's something you have to explain to people, because the world of templates and websites and technology and KPIs is the only world they know; that there is somewhere a customer doing things, and that this behavior is the cause of the data you measure, is new to them. Data is always a result of customer behavior. If you want people to see it that way, you have to explain it to them. I think it's the biggest mistake if the experimentation team or A/B testing team in a company is attached to the analytics department. Of course I understand the history: for most companies, A/B testing has something to do with data, so they ask the data and analytics people to be responsible for it. But actually you're doing experimentation, and you're experimenting with customer behavior. So if you can't explain your KPI with customer behavior, you only know half of the equation and you're always changing things superficially.

But this is something you can operationalize: change the hypothesis template, make people think about the behavior that is the cause of the effect they want to see, and things start to change.
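(A minimal sketch of such a behavior-first hypothesis template, with fields along the lines Andre describes; the field names are illustrative assumptions, not konversionsKRAFT's actual template.)

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Behavior-first test hypothesis; field names are assumptions."""
    observation: str           # what user research or data showed
    change: str                # what we change on the website
    behavioral_mechanism: str  # why this should change customer behavior
    expected_behavior: str     # the behavior change we expect to observe
    kpi_effect: str            # which KPI should move as a result

example = Hypothesis(
    observation="Users hesitate on the checkout page (session recordings)",
    change="Show accepted payment methods and a money-back guarantee",
    behavioral_mechanism="Reduces perceived risk and builds trust",
    expected_behavior="More users proceed from checkout to payment",
    kpi_effect="Checkout-to-payment rate, then conversion rate",
)
```

If the behavioral_mechanism field can't be filled in, the idea is a guess rather than a hypothesis, which is exactly the filter Andre applies to backlogs.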

Guido X Jansen: [00:13:51] Yeah, exactly. I think one of the issues is also that in an agile environment, you have companies working in two-week cycles where they're ready to deliver something. But in my experience, a CRO cycle isn't only about running one experiment; it's about a collection of experiments building up knowledge of your customers, and that usually doesn't happen within two weeks. How do you think that fits in? How do you manage the expectations on the client side?

Andre Morys: [00:14:22] I would start with the vision: everything that you ship as a team will be tested. So basically you stay with the two-week frequency, but you're not launching things directly; everything will be tested through an experiment. That would be the ideal state from my perspective. You eliminate the problem of choosing between "do I publish it directly, and nobody will ever learn if it worked or not" and "should I put it in an experiment". Because if you give people that choice, they will always go for instant gratification and say: oh, ship it directly, quickly, we believe it works (confirmation bias, whatever). This is why it won't happen if you let people decide; they always want to ship things directly. I would not give them the choice.

Guido X Jansen: [00:15:16] Yeah, exactly. So basically any change, even if it's just a technical thing, should run through the experimentation team. But then of course, I think a lot of companies run into the issue that they don't have the capacity to do that.

Andre Morys: [00:15:30] There are a lot of companies on the market,

Guido X Jansen: [00:15:33] at least when they start. Exactly. Who would know such a company?

Andre Morys: [00:15:40] No, let me think, but you can scale it, starting with a lot of services and companies. It starts with the testing tools: they're starting to build on-demand services where you can just press a button and somebody will develop the experiment for you. There are small, specialized companies on the market that build experiments for you. I always start with the biggest constraint, and that's usually developer resources.

So if you really want to solve that, you can solve it: just calculate the business case. How many of your releases might have a negative effect, or a positive one? You don't know; no matter what, you're not learning from your releases. Calculate what that problem costs in euros versus hiring a company that just makes an experiment out of each release. In the best case you will build your own experimentation system and platform, and you will be able to ship everything as an experiment with no extra cost. You just have to start to do the equation.
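(A back-of-the-envelope version of the business case Andre describes, comparing shipping releases untested against validating each one as an experiment. Every number below is a made-up assumption for illustration.)

```python
releases_per_year = 26        # one release per two-week sprint (assumed)
p_negative = 0.3              # assumed share of releases that quietly hurt conversion
avg_negative_impact = 50_000  # assumed yearly revenue lost per harmful release (EUR)
cost_per_experiment = 2_000   # assumed cost of an external service building one test (EUR)

expected_loss_untested = releases_per_year * p_negative * avg_negative_impact
cost_of_testing_everything = releases_per_year * cost_per_experiment

print(f"Expected loss from unvalidated releases: EUR {expected_loss_untested:,.0f}")
print(f"Cost of testing every release:           EUR {cost_of_testing_everything:,.0f}")
```

With these invented numbers, testing every release costs 52,000 euros against an expected 390,000 euros of avoided losses; the point is the shape of the equation, which Andre suggests doing with real figures.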

Guido X Jansen: [00:16:50] The funny thing, and I've mentioned this in a podcast before, you've probably experienced something similar: when you come into a company and start requiring the development team to run experiments, most of them see it as an extra thing, right? On top of their already very busy jobs, something added to the development team's backlog. While actually, after a while, the development team starts to notice: hey, it's not something extra. When we validate everything, we can do it in a much earlier phase. So instead of developing something for maybe three months, we can spend a week running a proper experiment, see if it fails, and it saves us two and a half months of development. That saves a lot of time.

Andre Morys: [00:17:36] And that's what I also see, and that's what I mean when I say operationalize it inside your organization. Whether you change your hypothesis template or the way you ship or deploy your stuff, it's a change in your organization that will affect your culture and the way people act. It won't happen accidentally. You have to make it happen.

Guido X Jansen: [00:18:06] Exactly. That's what we often say on the podcast: CRO is not some trick you just do for a couple of months and then you're done. It's a way of working. It's basically change management: we're changing the way companies think about their products and websites and how they should be making those better. So how do we convince managers and decision makers? Maybe you have a very willing e-commerce team that hires you, but you still need to convince the managers, decision makers or stakeholders of the actual value, and not just for that first six- or twelve-month period working with you, but so that they keep doing this. How do you approach that?

Andre Morys: [00:18:50] My advice is: the CFO should always be your best friend. As soon as you can calculate the value of the work, the value of the change process, the value of the investment in experimentation, you will quickly prove that it's positive. It's maybe not the best idea to try to convince the CEO or the CMO, because they want their big projects and big ideas, where they don't measure the outcome, to still win.

But the CFO is maybe the number cruncher, the guy who is more on the rational side and will calculate the business case. He will have the desire that everything the company spends money on has a positive ROI. And that's what experimentation and CRO, whatever you call it, stand for: only make things that deliver outcomes and have a positive ROI, and save tons of money by not doing the things that don't deliver ROI. So this is my advice: sit together with the CFO and calculate the whole thing from beginning to end.

Guido X Jansen: [00:20:04] Exactly. And there's an added benefit to working together with the CFO or the BI team: as a hired CRO specialist or agency, or as an internal CRO specialist, you're reporting numbers and hopefully improvements on the website, but of course it's much better if the BI team or even the CFO can confirm those numbers. Instead of just you as a specialist saying, okay, we made 30 million extra, and everyone looking at you: yeah, that's nice that you say it's 30 million extra, but we don't see those 30 million anywhere in our pockets.

Andre Morys: [00:20:42] Exactly. That's a big pitfall for a lot of optimizers: they claim to have X million euros of uplift, and then the CFO tells them: I don't see it in the end. This is why I think, and maybe that's another area we should cover in our talk, there's a big lack of understanding of statistics, especially the statistics of experimentation. There are still people that want an exact value to be communicated; you can't talk about confidence intervals, for example, with them. And this is a big problem, especially for those companies that only do a handful of experiments a year and want to be a hundred percent sure that there is a positive ROI.

If you tell them there actually is no hundred percent certainty, they say: then we leave it out and we don't do it, nope, I don't want that. So working with statistics and telling people about intervals and things like that, I think there is a lot of educational work to be done to enable especially the management level in a company not to produce bullshit results where you claim that you delivered 30 million bucks or whatever.

Guido X Jansen: [00:22:08] [Sponsor break (Dutch): Online Dialogue, CRO specialists helping to optimize websites, sales funnels and customer journeys. More info at onlinedialogue.nl.]

Besides the statistics, what do you report on? That's basically the issue with this whole field, I think: we as CRO specialists usually have full control over the numbers and the experiments, but over a year of running a lot of experiments, the numbers can vary, to say the least. So what do you report on? What do you see as the KPIs for the whole optimization program, not just one experiment?

Andre Morys: [00:23:06] There's one answer that is a win for both sides, and I say the more experiments you do, the easier it is to calculate ROI. If you only did one experiment and you want to be sure what the outcome is, it's hard. But if you do several experiments, it gets easier and easier. And as I said, we teach people what a confidence interval is, and we try to report our results as that interval. So we say: with a probability of X percent we made this, and with Y percent we made that result; we can't say it exactly, but it's somewhere in between. And we learned that the more accurately our reporting fits what really happens on the bottom line, the better people trust the overall program. The most loyal advocates of experimentation on our clients' side are the ones that can actually see the effect on the bottom line.
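(A minimal sketch of the interval-style reporting Andre describes, using a normal approximation for the difference between two conversion rates; the numbers are illustrative and this is not konversionsKRAFT's tooling.)

```python
import math

def uplift_interval(conversions_a, visitors_a, conversions_b, visitors_b, z=1.96):
    """Two-sided ~95% interval for the absolute difference in conversion
    rate between variant B and control A (normal approximation)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    se = math.sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Example: 400/10,000 conversions on control, 460/10,000 on the variant.
low, high = uplift_interval(400, 10_000, 460, 10_000)
print(f"Absolute uplift is somewhere between {low:.2%} and {high:.2%}")
```

Reporting "somewhere between these two values" instead of a single exact number is the habit Andre says builds trust with the CFO.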

Guido X Jansen: [00:24:10] That of course holds true when you run multiple experiments in a controlled environment, but you usually aren't in a controlled environment. Things change all the time. How do you prevent your company or your managers or your clients from saying: okay, we saw numbers go up, but we also ran a great marketing campaign or a TV campaign? Or, for example, there might be clients where the actual trend of the KPIs, maybe the revenue or the conversion rate or whatever it is you want to look at, is going down, just because people have less interest in your product in general, or because a lot of competitors are joining your market segment. How do you handle that?

Andre Morys: [00:24:55] I would say, especially in e-commerce, there is of course the big challenge of seasonality. You have those high-sale seasons where it's all about marketing and sale campaigns, with 70% off on top of whatever.

Of course, we try to encourage our clients to retest. We do the same experiment maybe twice if we are not sure how it's affected by these kinds of campaigns. And still, for most of our clients, the sample size is too small to automatically detect effects like that, so we are dependent on the information that there are actually campaigns running. We try to have a sample that is as normal as possible, so we can claim that we avoided, or tried to avoid, as much influence as possible from campaigns like that. But sure, if you are in a very competitive market with a very fast business model, you will have a lot of these influences.

The only answer so far is that you should retest things during these kinds of campaigns or seasons and outside of them, to know the real effect. But for the clients where we do that, we also had the advantage that we then started to think about experiments specifically for these kinds of high-sales seasons, in fashion e-commerce for example. So your roadmap changes and you start to learn what works at what time of year, and again, your understanding of the factors that influence your overall bottom-line ROI grows. You get more and more learning. So the hardest way to start is with "I want to know the bottom-line ROI of my experimentation program from the first experiment". You can't do that. You have to do more, and you have to learn how it's affected by your campaigns. The more experiments you have, the more results, the more data, the more accurate your ROI understanding will be. So that's my comment.

Guido X Jansen: [00:27:09] And that takes some time to do with your clients. That's why stakeholder management is really important,

Andre Morys: [00:27:18] for CROs, yes.

We developed a special workshop that we do with the management level of our clients. We call it the ROI workshop, and we collect all their objections: what do you think influences the ROI? How can we create our very own approach to calculating the ROI for you? Because if we say there's no ROI, that's dumb, it's not true; if we say the result of the experiment is one hundred percent the ROI, that's also dumb. The truth is somewhere in the middle. So let's find out the factors that we know and can cut away, so we end up with this kind of confidence interval. And we explain it to the people, we enable them, teach them the statistics.

Finally, when they get the reports from us, they know it's what we spoke about in that workshop. They know it's a perspective on what might be true or not.

Guido X Jansen: [00:28:13] Yeah. And like we just spoke about, ROI can come in different forms: ideally you want uplifts in conversion revenue, but it can also be in terms of saving development time, or not implementing stuff that isn't working.

Andre Morys: [00:28:27] Basically we started with the question: how do you get the organization to focus more on effectiveness instead of efficiency? And I would say: prove to them that it's worth it. These exercises sound really theoretical, based on statistics and so on, and while that is the tough part, I would say it's the foundation for proving this value.

And if you don't prove that value, nothing will change in an organization.

Guido X Jansen: [00:28:59] Do you actually sell CRO on a performance basis to clients? I think it was said in a course on CXL as well that you shouldn't do anything priced purely on KPI performance; that's almost always going to backfire. Is that also your experience?

Andre Morys: [00:29:20] No, we sell packages, and I would say the way we sell it to our clients is pretty much the way I just told you. We start with calculating the business case and focusing on effectiveness, telling our clients that we need a kind of proof of concept showing that focusing on effectiveness, putting more resources into better user research, better hypotheses, better experiments, is worth it.

And you will always find the kind of people that understand it, and they will be your advocates who say: yeah, let's do that, let's give it a try, because the way we are doing it right now can't be the right way. We are just busy and we don't measure the outcome of our work, so we feel good, but actually we see the numbers going down and we can't change it.

Guido X Jansen: [00:30:15] Yeah, everyone's feeling good, except the CFO.

Andre Morys: [00:30:18] Exactly. Talk to the CFO and get him on your team.

Guido X Jansen: [00:30:25] Exactly, and that's a good thing. Another thing I wanted to talk to you about: last month I did your course on CXL on heuristic evaluation. First off, I want to thank you, of course, for putting in all the time to put the course together. For our listeners, can you introduce the course?

Andre Morys: [00:30:43] Yeah, the course. First of all, I see a lot of heuristic evaluation that is just based on gut feeling. Most people doing it watch the website, look at it, and say: I don't like that. And I think that is very dumb. You need an objective framework so you can better analyze what levers you have. And the second thing, as I said: we do a lot of user research, and I see two problems. Either companies don't do user research, or if they do, they think everything that comes out of it is valid, and that's also not true.

A lot of times your users don't tell you the truth. This is why I came to the conclusion, together with the founder of CXL Institute, that we should have a course about heuristic evaluation that, first, makes the process more objective, and second, fills the gap: how can I get the answers that my users and my user research won't tell me?

I use the example of a big, expensive car. If you do user research about that car and ask, why did you buy this car, nobody will tell you: to impress my neighbors, I have a deep complex about my self-esteem. Nobody will tell you that, right? So this is a lot about implicit goals and psychology, about processes in decision making that people are unaware of. And this is why you need heuristic evaluation to fill that gap. I don't want to say that doing user research is wrong or a waste of time; analyzing the persuasive power of your website is the perfect add-on you need, but it only works if you have an understanding of your customers. This is something I explain in the course.

Guido X Jansen: [00:32:51] I think in the first part of the course you have a table showing all kinds of different research methods, and they all have pros and cons; some are better than others. And it helps to combine different sources, right? The more sources you have, the closer you get to the truth. Of course, I think there's a huge bias in our industry to look at data from Google Analytics, because everyone has it, so let's use that. But then you only know what people are doing, not why. That's why combining different methods, especially when you're doing heuristic evaluations, is so valuable.

Andre Morys: [00:33:29] Exactly. I have that picture, I think it's also a slide in the course, where I say the data is like a shadow: it's a projection of the user's behavior, as if you have a light and a user, and it projects an outline of the user on the wall. It's not the user and it's not the behavior; it's just the result of the behavior. And seeing the whole picture is much more than just the data. This is why I say: if you have the data, if you do user research, and you have a good understanding of your users' behavior from, for example, a heuristic evaluation, then you have looked at the same thing from three different angles, and your overall picture might be much more valid than if you just use one method.

Guido X Jansen: [00:34:17] [Sponsor break: Convert, A/B testing software with smart insert technology that avoids flickering, 24/7 chat support, and a 15x carbon-positive footprint.]

And of course you also introduce your own framework for doing this in the course. Can you give a short introduction of the model?

Andre Morys: [00:35:01] Yeah, the framework is called the Seven Levels of Conversion, and it's very old. As I said, we started with user research like 20 years ago, and the framework was born when we presented the results of our user research to our clients: we were searching for the top headlines, the top topics covering the errors that occur or the potentials we always see. It came from aggregating the results of dozens of these kinds of user research studies.

And it led us to these seven top topics. It starts with relevance: the problem we see most often is that websites lack relevance for the user. It goes on with trust, where people ask themselves: can I trust this company? Then they ask: where can I click?, which is the third level, orientation, and so on. You could say it's a checklist of seven inner checkmarks that people have to tick before they are ready to buy something. Actually it's only six checkmarks, because the seventh one is something that most of the time happens after the purchase. But basically it's like an inner dialogue, an explanation of the inner factors inside the customer's decision-making process. That's the model.

Since then, I think the early two thousands, we have been working with the model; A/B testing and building hypotheses came later. Right now the model is validated with thousands of A/B tests, and we are still working with it. We get a lot of positive feedback; in the German market a lot of companies are working with the model, and now that I've published it on CXL as a course, I also get a lot of positive feedback from there. So I'm happy that it's useful. It simplifies things.

Guido X Jansen: [00:37:08] Exactly. And it has stood the test of time. That's good for us.

Andre Morys: [00:37:11] Yeah. Like every model, it's not the truth; it's just a way of simplifying the truth so your work gets easier.

Guido X Jansen: [00:37:20] What was the saying again? All models are wrong, but some are more useful than others.

Andre Morys: [00:37:26] Exactly. And of course my friend Chris Goward, with his LIFT model, it's similar to ours, but the LIFT model has a plane, and I see it sticking on a lot of office walls, because for people the plane is very easy to understand. I said: damn, we should have given our model a plane or something,

Guido X Jansen: [00:37:51] You could use a car.

Andre Morys: [00:37:52] Maybe a car, yeah. Exactly.

Guido X Jansen: [00:37:59] Yeah. So we'll definitely link to the model in the show notes, and also a link to the course on CXL, of course. Thanks for sharing all this with us. One final question: of course you don't need to be inspired to run A/B tests, because you do proper research for that, but in general, for your work, where do you get your inspiration from?

Andre Morys: [00:38:24] Most of the time from psychological models or points of view that help us understand customer behavior better. Some of the best hypotheses for optimization came out of workshops where we teach our clients to use persuasive principles or behavioral design methods, build personas, whatever. Everything that helps to understand customer behavior, I would say, ends up in better hypotheses and better results.

Guido X Jansen: [00:39:02] Yeah. Actually meeting the clients, and the actual users, can be very eye-opening. Of course, clients might expect us to be the experts in whatever they are selling, which is often not the case: they're the ones selling bicycles or clothing or cars, and we're probably not. We can tell them how to get the best out of their customers and the processes around them, but we're not necessarily the experts in how to sell bicycles or something.

Andre Morys: [00:39:31] And that's good, because I would say it's even the other way around: our clients know too much about their customers. Not knowing that much is the foundation for seeing the user experience the way their users see it. As a product owner or a UX guy inside the company, you can't see the user experience anymore the way a customer sees it. So I think that's a big advantage. Some of our clients told me: oh, you can't start with the analysis of our website, you didn't get the briefing. And I said: thank God we don't have it, so we can analyze better.

Guido X Jansen: [00:40:08] Yeah, it's not personal for us.

Andre Morys: [00:40:11] Exactly.

Guido X Jansen: [00:40:13] Thanks, Andre. I'm glad you had the time to talk to us.

Andre Morys: [00:40:16] Great pleasure. Thanks for having me.

Guido X Jansen: [00:40:19] Are there any conferences or events where we could be seeing you in the next couple of months?

Andre Morys: [00:40:23] Oh, I don't know where you will be seeing me, but there are the obvious conferences that I'd like to recommend: go to a conference where you see people doing the same job as you. We mentioned CXL Institute and their courses, and of course conferences from Peep, like the CXL Live conference, or Elite Camp in Tallinn, which is always worth a visit.

I would say it's like a reunion each year to meet these great people.

Guido X Jansen: [00:40:53] Okay, thanks Andre, and talk to you soon.

Andre Morys: [00:40:56] Thank you.


