Personalise and automate with 51ºÚÁϲ»´òìÈ Target
Join this session to learn the core concepts of automating and optimizing with 51ºÚÁϲ»´òìÈ Target capabilities using Auto-Target and Automated Personalization.
Matthias Kolitsch Senior Multi Solution Trainer EMEA / 51ºÚÁϲ»´òìÈ
Please welcome to your screens, Matthias Kolitsch. Hi everyone, welcome to the final session today, welcome to the last sprint. This Skill Builder is about personalizing and automating with 51ºÚÁϲ»´òìÈ Target. My name is Matthias, I have been a technical trainer with 51ºÚÁϲ»´òìÈ for around four years now, and I train 51ºÚÁϲ»´òìÈ Analytics, 51ºÚÁϲ»´òìÈ Target, Audience Manager, and also the new Experience Platform. For the next 45 minutes we will talk about how we can personalize and automate with 51ºÚÁϲ»´òìÈ Target. I like to call this session "beyond A/B testing", because a lot of people think of 51ºÚÁϲ»´òìÈ Target as an A/B testing tool, and it certainly is, but you can do much more for your customers with 51ºÚÁϲ»´òìÈ Target. About the agenda: I have a short introduction to automation, where we take a quick look at the different features, and then we will jump into Auto-Allocate, Auto-Target, and Automated Personalization.

Let's jump into the content. The first question is when to use which level of automation, because the options are built up from the least automation to the highest. Auto-Allocate is the only automation option that relates to testing. It is an option we can use for an A/B test: instead of manually splitting the traffic between experiences, we use Auto-Allocate to see faster who the winner is, to lose less revenue during the test, and to have a guarantee that we have enough confidence to say that the winner we are looking at is actually the winner. Then we have Auto-Target. With Auto-Target we already have a high amount of automation, because the whole targeting part is controlled by the algorithm, but we still create the different experiences manually. So we have, for example, experience A, experience B, experience C, and experience D if we created four different experiences, and then we let 51ºÚÁϲ»´òìÈ decide which visitor should see which experience. Automated Personalization is the next level of automation. In terms of targeting it works exactly like Auto-Target, but with Automated Personalization I also let 51ºÚÁϲ»´òìÈ decide which content combination each visitor should see. So instead of only letting 51ºÚÁϲ»´òìÈ do the targeting, the experience creation, in the sense of putting specific offers together, is also done by the algorithm. Both Auto-Target and Automated Personalization are built to optimize for one specific goal metric. You can see it here again as a visualization, from the purely manual setup, which is our classic A/B test, up to the highest level of automated intervention, which is Automated Personalization.

In my last session, about 51ºÚÁϲ»´òìÈ Analytics, we talked about what Sensei, our artificial intelligence, can do in the 51ºÚÁϲ»´òìÈ Analytics context. So which 51ºÚÁϲ»´òìÈ Sensei features exist in 51ºÚÁϲ»´òìÈ Target? 51ºÚÁϲ»´òìÈ Sensei is included in Auto-Allocate, Recommendations, Automated Personalization, and Auto-Target, so in all the features where we work with an algorithm, Sensei is involved. But we only have 45 minutes, so I will show you later in the tool why Recommendations would break the 45-minute limit, and we will focus today on Auto-Allocate, Automated Personalization, and Auto-Target.
But I will at least give you a short hint in the tool later on why Recommendations is a bit out of scope. Let's jump into Auto-Allocate. How does Auto-Allocate work? Well, we always have an audience which is addressed with a test. It can be 100% of our visitors, or we can say the test should only be seen by a specific audience. Normally, with a manual test, we have, for example, a 50/50 split: if 100% of visitors come into our test and there is an experience A and an experience B, we split them 50/50 and have to run the test until we have a result with a confidence of more than 95%. With Auto-Allocate, we are not splitting the traffic ourselves. We just send 100% of the people into Auto-Allocate, and then there is that small multi-armed bandit you can see in the middle, which separates that 100% into 20% exploration: 20% of the people will still be training data and just see the experiences randomly, like a 50/50 split, while 80% of our new visitors will see the best performing experience. Because we shift 80% of the new visitors to the best performing experience, we get our result faster: more traffic on the best performer means we reach a high confidence sooner.

Imagine we run a manually split A/B test, and our test plan, our calculation beforehand, tells us we have to run the test for four weeks to get a valid result. After one week I slowly discover, oh, my experience B is actually working much better. But I can't stop the test yet, because I don't have 95% confidence, so I still have to wait for the four weeks to be over, as calculated before. That means for the next three weeks I still split the traffic equally between experience A and B, even though I have already seen that experience B works better, so I will potentially lose revenue during my test period. It might be a cost that is worth it to get a valid result, but with Auto-Allocate I don't need to lose that revenue, because Auto-Allocate works differently: the moment it discovers the current overall winner, the best performing experience so far, it automatically shifts the traffic, so most people will see the best performer, and my revenue will automatically be better. The only thing I can't do with Auto-Allocate is compare the losers against each other. If it shows me a winner, I have a guarantee of 95% confidence, so I can go for the winner and be confident that I'm doing the right thing, but I can't compare the different losers against each other. For me personally, testing is about finding a winner, so if I were responsible for testing, I would run most of my activities with Auto-Allocate, and only in some circumstances without it; I honestly can't see a use case where I actually have to compare all the different losers against each other. For that reason, Auto-Allocate might be the best option for me. The question you might have now is: when does Auto-Allocate actually have enough data to start working? Auto-Allocate starts allocating traffic once we have 1,000 visitors and 50 conversions; that is the minimum traffic I need before Auto-Allocate kicks in.
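To make that explore/exploit split a bit more concrete, here is a minimal Python sketch of the behaviour described above. It is only an illustration under simple assumptions (a fixed 20% exploration share, the 1,000-visitor/50-conversion activation threshold quoted above, and "pick the highest conversion rate so far" as the exploit rule); it is not 51ºÚÁϲ»´òìÈ's actual algorithm, which also works with statistical confidence.

```python
import random

class AutoAllocateSketch:
    """Illustrative explore/exploit traffic split, NOT 51ºÚÁϲ»´òìÈ's implementation."""

    def __init__(self, experiences, explore_share=0.2,
                 min_visitors=1000, min_conversions=50):
        self.stats = {e: {"visitors": 0, "conversions": 0} for e in experiences}
        self.explore_share = explore_share       # 20% keep exploring
        self.min_visitors = min_visitors         # activation threshold (visitors)
        self.min_conversions = min_conversions   # activation threshold (conversions)

    def _activated(self):
        total_v = sum(s["visitors"] for s in self.stats.values())
        total_c = sum(s["conversions"] for s in self.stats.values())
        return total_v >= self.min_visitors and total_c >= self.min_conversions

    def assign(self, visitor_id):
        # Before enough data: behave like a plain random split.
        if not self._activated() or random.random() < self.explore_share:
            choice = random.choice(list(self.stats))
        else:
            # Exploit: send the visitor to the best-converting experience so far.
            choice = max(self.stats,
                         key=lambda e: self.stats[e]["conversions"]
                         / max(self.stats[e]["visitors"], 1))
        self.stats[choice]["visitors"] += 1
        return choice

    def record_conversion(self, experience):
        self.stats[experience]["conversions"] += 1

# Hypothetical usage: assign a visitor, then log a conversion if it happens.
aa = AutoAllocateSketch(["A", "B"])
exp = aa.assign("visitor-123")
aa.record_conversion(exp)
```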
If I create an activity here and I click on A/B Test, I can open the Visual Experience Composer and just load a website; this is the demo page I will change in 51ºÚÁϲ»´òìÈ Target. If I click Next, I have my two different experiences; I'll keep it quick and dirty and just swap the image. Then I come to the targeting of my test. In the targeting I have a manual 50/50 split, which is what we normally get out of the box. And if I choose Auto-Allocate, I don't have the option to change anything anymore. That is all I need to do to turn Auto-Allocate on when I create my test. From now on, 80% of the new visitors will see the best performing experience, once Auto-Allocate has actually reached 1,000 visitors and 50 conversions in my test; that is the moment it actually starts working. Let's go back to the slide deck. In terms of reporting, there is not too much to say about the reporting of Auto-Allocate, because as soon as we have reached 95% confidence, a winner is shown, which means I can stop the test and go for the winner.

Then we have the next feature, Auto-Target. I can also jump here to my Auto-Target overview. And this is how Auto-Target works. If we jump back into the browser for a second, you can see here that I have the Auto-Allocate option chosen, but what you also see here is Auto-Target. Now, there is a very important point: Auto-Target itself doesn't have anything to do with testing. Auto-Target is already an automation approach, and we only see Auto-Target here because of the way we create experiences for Auto-Target. If I use Auto-Target for my targeting, I create my different experiences in the manual way, so I really build the different experiences by hand; I could create four or five different experiences here. And then in the targeting, if I choose Auto-Target, I say: instead of a manual approach to who should see which experience, I want to have my machine learning on. So I switch here, with Auto-Target, to machine-learning automation. And then I have more or less two different options; one option is maximum personalization.
So that means 10% of the traffic goes to a random experience, and 90% of my traffic goes into the automation. And if I'm not sure at the beginning whether my automation actually works better, I can make it 50/50. Then I again have something like an A/B test, where I test whether the automation actually works better than the normal, random way of showing people different experiences. If we jump back to the slide deck... I just need one second, because I can see that my charger is not working; I saw in the background that my battery was slowly dying even with the cable in, but now I'm sorted. So if we go back to the slide, you can see what is happening in more detail. We still have the 10% here on the slide who get the randomized experience; as you can see in the 51ºÚÁϲ»´òìÈ Target interface as well, these 10% are my control group, who just see random content without any machine learning behind it. Then there are the 90%; this is now set up to maximize personalization. The 90% who are in my targeting are split again by the same multi-armed bandit, or more or less the same multi-armed bandit, as with Auto-Allocate. So within the automation I have 10% training data, because the algorithm still has to learn and still needs a certain amount of training data, and 90% who go into exploitation and see the best experience.

That alone is not that exciting, because it is still our best experience overall, which means one size fits all. We want to move away from one size fits all towards one-to-one personalization. And that is the interesting part that Auto-Target and also Automated Personalization deliver: within the exploitation, within that 90% who see the best experience, there is again a multi-armed bandit, and this multi-armed bandit checks whether we already have enough information about this specific visitor to give them their own experience. For example, if I come to your website and you have Auto-Target on, and you have enough data about me, your 51ºÚÁϲ»´òìÈ Target will show me the best experience for me, for Matthias, while other people see other experiences; I get the experience that works best for me. Auto-Target is limited to the few experiences you have created for this specific activity, but out of those experiences I will get the one that works best for me as a person. If you have a new visitor, or if you don't have enough data about me yet, so if I come to your website for one of my first times, then I will at least see the best experience overall. That means I don't get personalization yet, but I at least see what works best for most people. The moment we have enough data, Auto-Target goes to the next level.
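Put as rough pseudologic, the nested decision flow just described could look something like the sketch below. The helper names (`has_enough_visitor_data`, `predict_best_for_visitor`) are hypothetical stand-ins for the model 51ºÚÁϲ»´òìÈ Target actually runs; only the overall shape of the control / exploration / exploitation split follows the description above.

```python
import random

def choose_experience(visitor, experiences, stats,
                      control_share=0.10, explore_share=0.10,
                      has_enough_visitor_data=None, predict_best_for_visitor=None):
    """Sketch of the Auto-Target decision flow, not 51ºÚÁϲ»´òìÈ's code."""
    best_overall = max(experiences,
                       key=lambda e: stats[e]["conversions"] / max(stats[e]["visitors"], 1))

    # 1) Control group (e.g. 10%): random experience, no machine learning.
    if random.random() < control_share:
        return random.choice(experiences), "control"

    # 2) Inside the automation slice, keep a small share as training data.
    if random.random() < explore_share:
        return random.choice(experiences), "exploration"

    # 3) Exploitation: personalize if we know enough about this visitor,
    #    otherwise fall back to the best experience overall (one size fits all).
    if has_enough_visitor_data and has_enough_visitor_data(visitor):
        return predict_best_for_visitor(visitor, experiences), "personalized"
    return best_overall, "best-overall"
```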
And here it becomes a bit tricky in terms of data. I know a few of you might have huge amounts of data and can do everything you want in 51ºÚÁϲ»´òìÈ Target anyway, but it might also be that some of you are already struggling with data for a normal A/B test. And even if you have enough data for a normal A/B test, it might not be enough data for Auto-Target, because with Auto-Target we have essentially two kinds of success metrics: I can have a conversion as my success metric, or a revenue-related success metric. If I use conversion, you can see here that I need 1,000 visits and at least 50 conversions per day per experience, and in total the activity must have at least 7,000 visits and 350 conversions. So you can see that is a huge amount of data, much more than is needed with Auto-Allocate. If you use a revenue metric as the success metric, it might be easier to argue later in meetings, because you can directly show: see, with this Auto-Target we made that much more revenue per visit than we would have without the automation. But for that you still need 1,000 visits and at least 50 conversions per day per experience, and additionally you now have to have 1,000 conversions per experience. With the conversion metric you needed 350 conversions in total; with the revenue metric you need at least 1,000 conversions per experience. So if you have five experiences, you need 5,000 conversions before the algorithm can actually start running and has enough data to work properly.
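As a quick way to sanity-check those thresholds before planning an Auto-Target activity, here is a small helper that encodes the numbers quoted above (1,000 visits and 50 conversions per day per experience; 7,000 visits and 350 conversions in total for a conversion metric; an additional 1,000 conversions per experience for a revenue metric). The function itself is just my own illustration, not an 51ºÚÁϲ»´òìÈ tool.

```python
def auto_target_ready(num_experiences, daily_visits_per_exp, daily_conversions_per_exp,
                      total_visits, total_conversions, metric="conversion"):
    """Rough readiness check based on the thresholds quoted in the session."""
    per_exp_ok = daily_visits_per_exp >= 1000 and daily_conversions_per_exp >= 50
    if metric == "conversion":
        totals_ok = total_visits >= 7000 and total_conversions >= 350
    else:  # revenue metric: additionally 1,000 conversions per experience
        totals_ok = total_conversions >= 1000 * num_experiences
    return per_exp_ok and totals_ok

# Example: 5 experiences on a revenue metric need at least 5,000 conversions in total.
print(auto_target_ready(5, 1200, 60, 40000, 4000, metric="revenue"))  # False
print(auto_target_ready(5, 1200, 60, 40000, 5000, metric="revenue"))  # True
```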
Before we jump into Automated Personalization, let's have one more look at the interface. You can see that Auto-Allocate sits in the A/B test workflow, which makes sense, because Auto-Allocate is in the end just a better way to get to my A/B test result. Auto-Target is also in the A/B test workflow, but only because we need the experience-creation options at the beginning when we build an activity for Auto-Target as well. So I create my experiences here, and I could add even more, and then in my targeting I just say: okay 51ºÚÁϲ»´òìÈ, I gave you the experiences I want; now, 51ºÚÁϲ»´òìÈ Target, you deliver these experiences to my visitors. Again, I showed you the 90/10 split on the slides, but if you think at the beginning, "I'm not sure whether the automation will work as expected", feel free to test it first with a 50/50 split. That means you have a manual 50/50 split, and only within the 50% of traffic that goes into the automation does the machine learning run to optimize which experience each visitor sees. You can have a custom allocation, but it has to be between 50/50 and 90/10; 90/10 is really the maximum personalization, and with the custom allocation I can choose anything within that range.

Let's jump back to the slide deck. With Automated Personalization, as I told you already, the targeting itself works exactly the same way as with Auto-Target. We have the same split: 10% just see a random experience and 90% go into the machine learning, and as with Auto-Target, the multi-armed bandit splits that again into 10% training data and 90% who actually see the best experience. And also with Automated Personalization, the moment there is enough data about Matthias as a person, it will show me the best experience for me personally. The big difference with Automated Personalization is how we create the content. So please jump back into the browser one more time. I will cancel that A/B test, because as you can see, Automated Personalization is not part of my A/B test workflow at all. If I click Create Activity and this time create an Automated Personalization, I can load the same website again. And here is the big difference, especially compared to Auto-Target: I don't have an "Add Experience" option here anymore.

I don't have a modification panel or anything like that, so my Visual Experience Composer, where I create my experiences, now looks different. What does that mean? I don't add manual experiences anymore; instead, I just give the system content alternatives. I can click, for example, on the image and say I want different images for that hero banner in my test. My image library opens, and I can just click a few: skiing is always good, a few mountains, a few different images, and I click Save. So in location one I now have 12 different images. I can also say, okay, this word "article": here are a few different alternatives for the text offer. Instead of "article" I can use different wording like "goods" or "products" or "stuff", or wherever my creativity ends. You can of course do more, like different colors, different designs and so on, but let's keep it simple because we want to talk about the automation. So in location one I provide 12 alternative images, and in location two I provide a few alternative text options. If I now click on Preview, I can have a quick look at all the different experiences I created. You can see that if I just change a few things here, I end up with a lot of different content combinations, and I can click through all of them. So I didn't create four or five explicit experiences manually; I just threw in a lot of different content, and 51ºÚÁϲ»´òìÈ Target built a lot of combinations out of that content.

I also have a small traffic estimator here, because you can already imagine: we already need a lot of traffic for Auto-Target, right? But if I no longer have five experiences, if I now have a hundred experiences, I might need much more traffic for my automation. For that reason there is a small traffic calculator where I can put in my typical conversion rate, how many visits I estimate per day, and my test duration. So if I have a conversion rate of 1.5% and a hundred thousand visits per day, my test would be able to get me a result within 14 days; I have a chance to actually get the algorithm running within 14 days. If I had only, let's say, 50,000 visits per day, that would still be good enough. Let's put 5,000 in as an example. You can see that if I have too little traffic to get the algorithm started within 14 days, it tells me I should give it 64 days to be ready. So if I only have 5,000 visits per day, I would need 64 days before the algorithm can run, or I should reduce the number of offers. Well, thank God, I don't think any one of you only has 5,000 visits per day, so it should be possible to get the algorithm running even with quite a few different combinations here.

I can also manage my content. If I go to Manage Content, I see all the 48 combinations I can use. I can exclude very specific combinations here if I want, and I get the full list of my combinations. I can also see my single offers and even exclude specific offers from my running Automated Personalization. Because the thing is, with Auto-Target and Automated Personalization, these activities are not really built to run only for a specific amount of time. They are built to be turned on and left running, and then we just throw in new content or remove content, and the algorithm again needs time to learn the new content.
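Coming back to the traffic estimator for a moment: the number of experiences is the product of the offer counts per location, and the required traffic grows with it. The calculation below is only a back-of-the-envelope sketch under an assumed per-experience visit requirement; 51ºÚÁϲ»´òìÈ's built-in estimator uses its own model with conversion rate and duration.

```python
import math

def combinations(offers_per_location):
    """12 images in location 1 x 4 text offers in location 2 = 48 experiences."""
    total = 1
    for n in offers_per_location:
        total *= n
    return total

def rough_days_needed(offers_per_location, daily_visits, visits_per_experience=1000):
    """Very rough duration estimate: NOT 51ºÚÁϲ»´òìÈ's traffic estimator."""
    needed = combinations(offers_per_location) * visits_per_experience
    return math.ceil(needed / daily_visits)

print(combinations([12, 4]))                # 48 combinations
print(rough_days_needed([12, 4], 100_000))  # plenty of traffic -> few days
print(rough_days_needed([12, 4], 5_000))    # low traffic -> many more days
```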
But a very important message: if you use Automated Personalization or Auto-Target, you will very likely no longer have a reason to run normal tests on those specific pages.
Once I have my combinations built and go to the next step, I come to my targeting. Here I have only one option, and it is the same option I have in Auto-Target: I again have the possibility to start with a 50/50 split to test whether the machine learning actually works, if I don't fully trust that option yet, or I can directly maximize my personalization traffic. Everything about goals and settings is the same, so it doesn't matter whether you use an A/B test, Auto-Target, Auto-Allocate, or Automated Personalization, the setup in goals and settings is more or less identical, and we won't go into too much detail about that. There is only one important consideration, specifically for Automated Personalization: it is the only activity left that cannot use 51ºÚÁϲ»´òìÈ Analytics as the reporting source. Only for Automated Personalization do you always have to use a metric from your 51ºÚÁϲ»´òìÈ Target reporting; you can see here that Target is the only supported reporting source. In the past the same was true for Auto-Allocate and Auto-Target, but that is no longer the case: for Auto-Allocate and Auto-Target you can use 51ºÚÁϲ»´òìÈ Analytics as the reporting source; only Automated Personalization is still limited to 51ºÚÁϲ»´òìÈ Target.

Let's jump back to the slide deck. With Automated Personalization we have very specific reports: an activity-level report and an offer-level report. The activity-level report only helps me to see one thing: does the automation work better than the manual approach? So whether the automation works is something I see in the activity-level report. Then I can drill down: from the activity-level report of my Automated Personalization I click through to the offer-level report, where all my different offers are shown. Very important here: the offers are not compared with each other; as in a normal test, each offer is compared with the control group. So for each single offer you see how much lift we have compared to the control group, and with which confidence. The confidence is relevant here too: we should have a high enough confidence before making decisions like removing an offer from an Automated Personalization, which is a possibility. And here again the hint that we don't have Analytics for Target for Automated Personalization.
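The two numbers to read in the offer-level report are the lift of each offer against the control group and the confidence behind that lift. As an illustration of what such a comparison can look like, here is a simple two-proportion z-test on conversion rates; this is not necessarily the exact statistic 51ºÚÁϲ»´òìÈ Target reports, just the general idea.

```python
import math

def lift_and_confidence(conv_offer, visits_offer, conv_ctrl, visits_ctrl):
    """Lift of an offer vs. control plus a two-sided z-test confidence (illustrative)."""
    p_offer = conv_offer / visits_offer
    p_ctrl = conv_ctrl / visits_ctrl
    lift = (p_offer - p_ctrl) / p_ctrl

    # Pooled two-proportion z-test.
    p_pool = (conv_offer + conv_ctrl) / (visits_offer + visits_ctrl)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_offer + 1 / visits_ctrl))
    z = (p_offer - p_ctrl) / se
    confidence = math.erf(abs(z) / math.sqrt(2))  # = 1 - two-sided p-value
    return lift, confidence

lift, conf = lift_and_confidence(conv_offer=130, visits_offer=2000,
                                 conv_ctrl=100, visits_ctrl=2000)
print(f"lift: {lift:.1%}, confidence: {conf:.1%}")
```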
Then there are two reports which we also have for Auto-Target: Automated Segments and Important Attributes. In the Auto-Target reporting we can see which experience performs best, and in the offer-level report for Automated Personalization I can see in even more detail which specific offers work best, so I can make the right decisions to modify the content for the future. For both activities I also have two reports where I can look behind what the targeting algorithm is doing. Because remember, in the targeting setup the only decision we make is: do we want a 50/50 split, or do we want to maximize our personalization? The moment we turn it on, the algorithm uses all the data we have in 51ºÚÁϲ»´òìÈ Target: all the data coming from our 51ºÚÁϲ»´òìÈ Target implementation, all the audiences shared from Audience Manager or from 51ºÚÁϲ»´òìÈ Analytics, customer attributes, all profile attributes we load in via API, and all profile attributes coming from the website. All the data we have in our ecosystem, in the Experience Cloud, is used in 51ºÚÁϲ»´òìÈ Target for this algorithm; it's not just the data coming in directly, it's everything that is shared with 51ºÚÁϲ»´òìÈ Target.

To see a bit of what is behind that algorithm, we have those two reports. With Automated Segments you can see which visitors the algorithm has grouped together to address them with the same experience. That is very interesting, especially if you consider that each visitor with enough data is shown their own offer combination; with Automated Segments I can find out which of these visitors share attributes, which visitors are considered an audience by the automation. And then we have Important Attributes, which goes one level deeper: there we can see which dimensions, which metrics, which information from the Experience Cloud, which data is considered important and relevant for my targeting. So there is a relevance scoring where we can see which attributes are used most for our targeting. There is just one consideration about these two reports: you only have access to them once your activity has been running for more than 15 days. We will not have that reporting right from the beginning, because of course it takes time for the algorithm to learn; we already saw that before, when we looked at the traffic estimator for Automated Personalization, which was by default set to 14 days. That shows us whether we have enough traffic to get the algorithm running within 14 days, and it also fits with the personalization insights reporting: we assume it takes at least around two weeks for the algorithm to run successfully, if you followed the traffic estimator while you built the Automated Personalization.
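To make the idea of "important attributes" a little more tangible: conceptually it is a ranking of which profile attributes separate converting visitors from non-converting visitors most strongly. The toy scoring below (spread of conversion rates across an attribute's values) is a simplified stand-in of my own, not the Personalization Insights algorithm.

```python
from collections import defaultdict

def attribute_importance(visitor_rows):
    """Toy relevance score: spread of conversion rates across an attribute's values."""
    scores = {}
    attributes = {k for row, _ in visitor_rows for k in row}
    for attr in attributes:
        buckets = defaultdict(lambda: [0, 0])  # value -> [visits, conversions]
        for row, converted in visitor_rows:
            value = row.get(attr, "unknown")
            buckets[value][0] += 1
            buckets[value][1] += int(converted)
        rates = [c / v for v, c in buckets.values() if v > 0]
        scores[attr] = max(rates) - min(rates) if rates else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

rows = [({"device": "mobile", "loyalty": "gold"}, True),
        ({"device": "desktop", "loyalty": "none"}, False),
        ({"device": "mobile", "loyalty": "none"}, True),
        ({"device": "desktop", "loyalty": "gold"}, False)]
print(attribute_importance(rows))  # device separates converters more than loyalty here
```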
One of the last slides for today is about making it actionable. When an Automated Personalization is running, and this also applies to Auto-Target, so when one of the automation activities is running, as I mentioned before, it is not built to run only temporarily. If I activate Auto-Target or Automated Personalization on my website, or on a specific area of my website, I just let it run. I don't need A/B testing there anymore, because Auto-Target and Automated Personalization are optimizing what each visitor sees anyway. So how do we make it actionable? If the Automated Personalization is doing great, we can lean back and open a bottle of champagne with our management. We let it run as it is, we see in the lift that we make more revenue, and that's fantastic. But after a while we might want to start optimizing the Automated Personalization or Auto-Target a little bit ourselves and try to add offers or remove offers. That is something the offer-level report, for example, helps us with, because it is designed to show which offers work especially well and which offers don't work in our Automated Personalization. Then you can add and remove offers. One very important thing: you need to allow the algorithm time to relearn. If I throw a few new offers in, the algorithm might need some time before these new offers are actually shown to people, because it has to learn them with that small percentage of training data.

Then we have the case where both keep pace, where the Automated Personalization and the control keep pace. That means my automation is not working better than my default website. That's a problem; that is not what Auto-Target and Automated Personalization are designed for. So I should check my sample size guidelines; I might have to wait a couple of days, because remember, we need enough data before the algorithm starts running. We might need to reconsider the amount of traffic going into the automation: if we have a 50/50 split and it keeps pace, it doesn't take off, we might need to change it to 90/10. We also want to check that our offers are different enough, that our locations are working, and that our audience separation is good enough. Then hopefully our Automated Personalization will be doing great afterwards. Even worse is if it is underperforming: then I have to take exactly the same steps as when it keeps pace, because it's even worse, but I should also contact 51ºÚÁϲ»´òìÈ Consulting directly, because it's really not supposed to be like that, and I have to consider deactivating the activity, because we are losing revenue.

Very important for Auto-Target and Automated Personalization: my recommendation is that it only really makes sense to run them if your main business goal is the success metric. These two activities optimize your website automatically, and you want to optimize your website for your main business goal, not for some secondary success goal. It also means that if it is underperforming or keeping pace, it is not good for your main business goal, and in the worst case we lose a lot of money, so we really have to consider deactivating the activity.
After the introduction to automation, we went into Auto-Allocate, Auto-Target, and Automated Personalization. We at least had brief looks at the interface, but in 45 minutes we unfortunately can't build an activity end to end, because then we would only be able to cover one of them. I hope it was a good overview for you, and if you have any questions, please let us know: use the chat and put in all your questions.

There is one question from Nina: "We got a recommendation to run simple A/B tests first to select strong experiences before adding them to AI activities. Would you say this doesn't make sense for Auto-Allocate?" Well, we have to be careful here, because Auto-Allocate is nothing other than an A/B test. So you don't have to run a manual A/B test before you use Auto-Allocate, because Auto-Allocate is a test itself. But if you talk about Auto-Target, and let's say you have an idea for five different experiences you want in your Auto-Target activity, then it can make sense to run a manual test first to find out which experiences are the best ones to actually start your Auto-Target with. And later on you might be in a situation where you want to add a new experience in addition to the ones you have. Does that answer the question? Waiting for Nina's feedback.

In the meantime, Daniel asks, because he might have missed it: did you say why the recommendations engine was out of scope for this session? Oh, sorry, yes, I can. Could you share my desktop with my browser for a second? Is that still possible? Perfect. So why are Recommendations out of scope? We have here all our activities, from A/B Test to Automated Personalization, and we have some other activities like Experience Targeting and Multivariate Test, which are still more manual approaches, and we have Recommendations. I would never say don't do recommendations, because if we look at the big players like Amazon, Spotify, Netflix, and YouTube, they are mostly doing recommendations; if you open your Amazon app now, most things on your first screen will be recommendations. But if you take a look here at the top, Recommendations is the only activity in 51ºÚÁϲ»´òìÈ Target which has its own setup navigation. If we start talking about recommendations, we have to talk about product catalogs, we have to talk about different criteria, where we have a lot of different algorithms, for example "people who viewed this viewed that" or "people who bought this bought that", then we have to decide how the recommendation should look, and so on. So there is really a lot more behind Recommendations. I would recommend you take a training about Recommendations, we offer more on that topic, but it's just something we can't really do in 45 minutes.

All right, no additional questions. I might have one last link for you; just interrupt me if another question comes up. Unfortunately, I've lost the link, give me a second, otherwise I will share it later on. It is a link with the limitations of 51ºÚÁϲ»´òìÈ Target; for example, for Automated Personalization, the limit on different content combinations is 30,000 per activity, so you can really go over the top, but we know that we also need the right amount of traffic. Any other questions in the meantime, Steve? "No, thanks, Matthias. I know which page you're referring to for the limits, I just posted that." Ah, perfect.
Thank you for that. So, yes, Nina, there will be a recording, so you can have a look at that; the question was just whether we have a recording of the session. I think we're about at time to close. So thank you all for joining, I hope that was helpful. It was just a quick overview of the different options; we also offer full-day trainings about all the different automation options, so there is a bit more to discover, but I tried to give you as good an overview as possible in 45 minutes and also to spark your interest in automation. Yes, you can run A/B tests with 51ºÚÁϲ»´òìÈ Target, but also think about what you can do beyond A/B testing to reach the next level of personalization. Have a good rest of the day, have a good evening, and thank you for joining this session.