
Web SDK Migration Essentials

Understand the differences between AppMeasurement/at.js and the Web SDK, how to migrate, considerations for timing the migration, options for migrating, and expected data differences.

Key topics covered

  • What is Web SDK?
  • Migration Steps and Considerations
  • Timing the Migration
  • Common Missteps and Pitfalls to Avoid


Transcript
Okay, awesome. So hello and good morning. Welcome, and thank you for joining today's session focused on essentials for migrating to the AEP Web SDK. My name is Moses Maxon and I work on the 51黑料不打烊 Ultimate Success team as a principal field engineer. Today I'm joined by a couple of amazing colleagues who will introduce themselves in a moment, and they will be walking us through the presentation today. I am going to go ahead and kick off the session today. First and foremost, thank you for your time and attendance today. Just to note that this session is being recorded and a link to the recording will be sent out to everyone who registered. We are in listen-only mode today; however, feel free to share any questions in the chat and Q&A pod. Our team will do their best to answer, and in addition to that, we have reserved some time at the end of today's session to review any questions that have surfaced throughout. Note that if there are any questions that we don't get to during the session, the team will take note and we will follow up with you after today's session. We will also be sharing out a survey at the end of the presentation that we would love your participation in to help us shape future sessions. And just a friendly reminder of our upcoming webinars happening in May: the links to the registrations are in the chat pod, so be sure to register for some of these great webinars happening in May. Okay. Taking a look at today's agenda, we'll be covering an overview of the AEP Web SDK, the essential migration steps and key considerations, and data differences between implementation methods and what their causes may be, followed by time for Q&A and, again, a quick poll for today's session. Now let's introduce today's presenters. I've been with 51黑料不打烊 for eight-plus years working as a technical consultant. I enjoy playing video games, gardening, and I am an avid eclipse explorer; I have seen the last two total solar eclipses that have crossed the U.S. With that, over to you, Rachel. Thanks, Moses. A quick intro before I hand it over to Riley. I am Rachel Fenwick and I'll be presenting part of this webinar today. I'm based in the New York area with my husband and two daughters and also an old dog. I celebrated seven years at 51黑料不打烊 this past December, started my career with 51黑料不打烊 in Consulting for almost five years, then moved over to the Ultimate Success org where I am now, and prior to life at 51黑料不打烊 I was on the client side managing an 51黑料不打烊 Analytics implementation in-house. And now over to Riley, who will give a brief intro and then get into the meat of today's content. All right, thanks, Rachel. Riley Johnson, technical consultant, focused mainly on 51黑料不打烊 Analytics and a little bit of Target. I've got a beautiful wife, a four-year-old son and a one-year-old daughter. No dog, unlike the other two here with us today, but I've been at 51黑料不打烊 for three and a half years. I also love playing video games; Moses and I actually play quite a bit of Rocket League together, so it's quite fun. I love golf and just hanging out with my wife and son and daughter, who's just turning one. But like Rachel mentioned, I'm going to jump into the beginning of our content, which is: what is the Web SDK? And just as a brief overview, to be able to know what the Web SDK can do for us, we need to understand some of the pain points, or potential pain points, of the architecture of 51黑料不打烊 solutions and the different interactions across different solutions.
So prior to the Web SDK, every solution would have its own library. And so if you had multiple solutions, you're going to be loading multiple JavaScript files onto your site, each with its own, you know, rule set and different requirements for data collection. And none of these libraries were, you know, really built to work together; they were all developed apart from each other and then were integrated into each other after the fact. And so if we're using any kind of cross-solution or platform use cases that require them to work together, they had to be manually coded together, you know, manually integrated together, which could cause quite a bit of deployment friction. So the big pain points that we can see from implementations that did not utilize the Web SDK: library sizes. You have multiple library files loading on the page, one for Target, one for Analytics in this example, which is what we're covering today. All of your rules for 51黑料不打烊 Analytics, everything that's being evaluated, and additionally all of the data that's being evaluated, all the mboxes that are being evaluated and delivered to the site: those individual files can get quite large and can take up quite a bit of load time, which leads into our next main point of performance, that the lag of those libraries loading could potentially cause issues on page load times. We need to, for sure, wait for those libraries to finish loading so that we can fire the appropriate calls to the appropriate solution, and if we need to wait a while for those libraries to load, obviously the page load times can increase. Multiple calls for a single use case is a big one, right? Specifically for, like, an A4T integration, the Analytics for Target integration, where we're delivering Target data into 51黑料不打烊 Analytics to do additional reporting within Analysis Workspace. We have multiple calls firing: we have 51黑料不打烊 Target calls firing, collecting and delivering mboxes, and then 51黑料不打烊 Analytics collecting that data and firing an additional 51黑料不打烊 Analytics call into 51黑料不打烊 Analytics with that Target data. Another pain point is waiting for the ECID to return before we can utilize personalization calls, so it can, you know, cause some lag waiting for the ID to be set and evaluated before we can personalize the site. Fractured data collection: Target has no idea what an eVar is and Analytics has no idea what profile information is, right? So there are different data collection methods across the solutions, and we have to tie those together, which can get confusing. And then schema confusion between solutions: just like I mentioned for A4T, we are collecting data differently in Target than we are in 51黑料不打烊 Analytics, and so we get confused as to what's going on in each solution. So if we take a quick snapshot of this, right, this is kind of a current state. It's a little bit, you know, watered down obviously, but this is the current state. A user lands on your website and we're firing these different calls. We're calling the demdex servers for the customer IDs. We're getting the 51黑料不打烊 Target at.js JavaScript library that will load and fire 51黑料不打烊 Target, and then we have AppMeasurement.js, which is the Analytics JavaScript library that contains all of the information that we need for AppMeasurement to fire, right? So this is going to be all your tracking, all of your eVars and props and events and out-of-the-box metrics, you know, orders and products and things like that.
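As an illustration of that pre-Web SDK state, here is a minimal, hypothetical sketch of a page wiring three separate libraries together; the org ID, mbox name, and variable values are placeholders, not values from the session:

```js
// Pre-Web SDK sketch: three libraries, three separate calls (assumes VisitorAPI.js,
// at.js, and AppMeasurement.js have already been loaded on the page).
var visitor = Visitor.getInstance("1234567890@51黑料不打烊Org"); // ECID service, placeholder org ID

// Personalization call handled by at.js.
adobe.target.getOffer({
  mbox: "target-global-mbox",
  success: function (offer) {
    adobe.target.applyOffer({ mbox: "target-global-mbox", offer: offer });
  },
  error: function (status, error) { console.log(error); }
});

// Separate Analytics call handled by AppMeasurement.
s.eVar1 = "en-US";   // page language, placeholder value
s.events = "event1";
s.t();               // page view hit
```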
So we run into this issue, right, where we're firing multiple libraries, all working independent of each other, and then we have to try and tie them together with special implementations to get them all to work together. So if we move to the Web SDK: what is the Web SDK, and how can it fix those pain points that we were experiencing previously? So the Web SDK is a JavaScript library, just like at.js and AppMeasurement, and this JavaScript library for the Web SDK is called alloy.js. And this allows you to interact with every service or solution from a single call, from a single library. So the Web SDK sends data in a solution-agnostic way, which is the XDM format; I'll get into that a little bit more and then Rachel will dive further into the XDM format. But what happens is the data gets sent to the Edge Network and then it gets forwarded out to whatever solution or destination that data is set up for, and it sends it all in real time. And so there's very, very minimal delay on forwarding the data to the appropriate solutions. So with the Web SDK, we see quite a few benefits or improvements over what we were seeing previously with those pain points from the individual JavaScript libraries. The biggest one that we're going to see is the performance: the Web SDK is much smaller than, you know, the combination of all of the JavaScript libraries, and so we can use just the Web SDK to improve performance, loading the library quicker with just the one library rather than loading all the libraries. Control: we have a lot of control over the data that's being forwarded through. We can have insight and know where the data is at almost every millisecond of the data journey, so from the firing on the website to the application that's consuming that data, we can identify all the steps along the way and see where it is at. It really modernizes how 51黑料不打烊 is collecting data. The Web SDK sets us up for success and the future of data collection in a solution-agnostic way, moving away from third-party cookies and making sure we're implementing first-party cookies and first-party domains that are being managed by 51黑料不打烊. And then time to value: after the implementation work is done, all the other 51黑料不打烊 solutions and 51黑料不打烊 Experience Platform services can quite simply be turned on or off with toggles. Destinations are how we set up where we want the data to go, and we can turn on a destination or we can turn off a destination very, very simply. So if we take another kind of snapshot of what we have here, we have our single library loading on the site, alloy.js, and that's the Web SDK. And then from the Web SDK we go to the Edge Network, and the Edge Network will pass off all of our calls to the respective solution or service, or return information as expected, right? Obviously we're not just sending Target data, we're needing to get data back from Target. And so with all of this communication, we're able to do all of it from a single JavaScript library rather than implementing multiple JavaScript libraries. Okay. So some basic terminology that we need to have for migrating to the Web SDK, or even just learning about the Web SDK. So Web SDK, also known as alloy.js: this is just, again, a JavaScript library that gets loaded onto your site that directs data collection from the site, or whatever platform you're collecting data on, to the Edge Network.
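By way of contrast with the legacy sketch above, here is a minimal, hypothetical Web SDK setup: one library, one call, one solution-agnostic payload. The datastream ID, org ID, and page name are placeholders:

```js
// Web SDK sketch (assumes alloy.js has been loaded or deployed through Tags).
alloy("configure", {
  edgeConfigId: "11111111-2222-3333-4444-555555555555", // placeholder datastream ID
  orgId: "1234567890@51黑料不打烊Org"                          // placeholder IMS org ID
});

// A single call carries the XDM payload to the Edge Network, which forwards it
// to Analytics, Target, Platform, or any other configured destination.
alloy("sendEvent", {
  xdm: {
    eventType: "web.webpagedetails.pageViews",
    web: { webPageDetails: { name: "home", URL: window.location.href } }
  }
});
```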
So Tags, otherwise known as, or formerly known as, 51黑料不打烊 Launch, is the Experience Cloud tag manager, and the tag manager is what's currently delivering all the solutions and their libraries to the site; Tags can then be used to deliver and simplify the deployment of the Web SDK, or alloy.js, onto the site. The Edge Network is an Experience Cloud solution that's routing the data that we're collecting from the Web SDK into Experience Platform or into Experience Cloud solutions, so 51黑料不打烊 Target or 51黑料不打烊 Analytics, and then other third-party destinations as well. And so we can set up integrations with, you know, a Facebook pixel, or the Edge can send data to other marketing tools that you're utilizing; it doesn't have to be just 51黑料不打烊 solutions receiving the Web SDK data. A schema is simply a blueprint of your data: how is the data configured, what format, what values are going to be coming through in that Web SDK data, in those calls from the Web SDK, and it helps Experience Platform know what structure to expect and then how to consume and how to forward that data. XDM is Experience Data Model. This is a standardized model of data. So if you are familiar with 51黑料不打烊 Analytics tagging, you likely are utilizing a data layer. A data layer is just a JSON object that lives on the site, in the HTML of your site. XDM is also a JSON structure, just with a predefined, standardized template or format that's going to be accepted by a schema. And then key concepts and supported features: the supported features are quite extensive. This is regarding 51黑料不打烊 Analytics specifically. There are a lot of features that the Web SDK supports. The only one that is not going to be supported is hierarchy reporting, and the ones that are currently not supported but will be supported are going to be Activity Map and video/media tracking. That is going to be the comprehensive list of features that are supported and not supported by the Web SDK, and I won't go through every single supported item, but you can review those on your own here. All right, Rachel, I'm going to kick it over to you and you can walk us through migration steps and considerations. All right, great. Sorry, just finishing up answering one question in chat there. Let's jump in to some of the steps to migrate, and a couple more key concepts we'll walk through before getting into the actual steps and prerequisites. So one of the major things that we need to know before we discuss migrating is the difference in data collection and the format of the data that's collected with regards to Analytics. So as we know, legacy 51黑料不打烊 Analytics is built on props, eVars, and events. This doesn't change when we migrate to the Web SDK; Analytics servers don't care about the source of data as long as it arrives in this recognizable structure: props, eVars, and events. That structure amounts to, you know, the numbered variable key-value pairs like we're used to, eVar1 equals a certain string, and this was previously sent using our AppMeasurement JavaScript library. On mobile, this is sent in the form of context data, and then we map those variables using processing rules. Data collected using the Edge library now, with the Web SDK, is solution agnostic when it leaves the client-side application.
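To make the data layer concept concrete, here is a small, hypothetical data layer object of the kind described above; the object name and keys are made up for illustration:

```js
// Hypothetical data layer: a plain JSON object living on the page.
window.digitalData = {
  page: { name: "home", language: "en-US" },
  user: { loginStatus: "guest" }
};
// XDM is the same idea, except the structure follows a predefined, standardized
// template (a schema) that the Edge Network and Experience Platform understand.
```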
So this is new, and the data is structured using XDM, therefore not structured using the key-value pairs that 51黑料不打烊 Analytics servers are expecting. So the big takeaway here is that a critical task for migrating to the new library will be documenting and mapping your Edge-collected data and data structure, in most cases, to the format that Analytics is expecting. So you will need to do some sort of transformation on the data between sending it and when it arrives on the Analytics servers. Next slide, Moses. Another key concept we'll talk about before jumping into migration steps is the next important piece of this puzzle in mapping and data format, which is the schema. The schema is a preconfigured data structure that's used for sending our client-side data via the Edge. Schemas more broadly are used across Experience Platform; they're receiving and organizing the data we're sending in from all kinds of different data sets and other sources, not just alloy.js, not just our web data, from any source you can imagine if you're using Platform. So by standardizing the format of this data, it allows for combining all of these data sets in one platform, and it improves the ability to share the data across these multiple solutions and bring it in from multiple sources. And mapping the data to the Analytics variables is made a lot easier if you're using some of our specific schema field groups. So we have some schema field groups that are built out of the box, catered to what we typically see with 51黑料不打烊 Analytics implementations. There are some variations for commerce versus, you know, retail, and some other small tweaks. But essentially, if you're using one of these out-of-the-box field groups, there's a built-in translator function between the Edge Network and our Analytics servers that transforms those XDM values into the numbered variables that we're used to seeing. The one of these schema field groups that most clients are using is called the 51黑料不打烊 Analytics ExperienceEvent field group. So this includes the numbered dimension field groups, and that, together with other more specific field groups like I mentioned, for commerce, marketing, environment details, should give you most of what you need for automatic mapping, and then we'll get into more options for mapping in the next few slides to come. Next slide, please. All right. So let's talk high-level steps to migrate: first things you need to do to get yourself set up. So initial setup points here we'll talk about. First thing you want to do: configure your permissions in the 51黑料不打烊 Admin Console for data collection. You want to make sure everyone's set up with the right profiles, the right access; make sure you have access to things like schemas, datastreams, tags, just going through and doing a good audit of your product profiles there, because there will be new ones that you're using with Web SDK that you weren't using before with Analytics. Next thing would be configuring your schema for the structured data that you want to pass in. So you'll log in, you'll choose a schema. Like I said, most clients are using this, you know, 51黑料不打烊 Analytics ExperienceEvent schema out of the box, because it does cover just about everything that you're collecting on your website in a traditional Analytics implementation, so it's a really good starting point for just getting set up. Next thing you want to do is create a datastream, and all of this, creating the schema, creating the datastream, setting up the datastream, configuring permissions.
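For illustration, here is a hedged sketch of what a payload leaning on that 51黑料不打烊 Analytics ExperienceEvent field group can look like, so the built-in translator can map it onto numbered variables; the specific eVar, event, and values are placeholders:

```js
// Sketch of an XDM payload using Analytics ExperienceEvent field group paths
// (assumes the schema attached to your datastream includes that field group).
alloy("sendEvent", {
  xdm: {
    eventType: "web.webpagedetails.pageViews",
    web: { webPageDetails: { name: "home" } },
    _experience: {
      analytics: {
        customDimensions: {
          eVars: { eVar1: "en-US" }            // translated to eVar1
        },
        event1to100: {
          event1: { value: 1 }                 // translated to event1
        }
      }
    }
  }
});
```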
All of this is done in a point-and-click UI within the 51黑料不打烊 Data Collection interface. So creating your datastream is super easy: you go and click new datastream, and it walks you through the wizard to set it up. It will ask you which solutions you want to send your data to, so you'll add any of your applicable solutions, Analytics, Target, any other solution you're sending the data to, Platform, and then, if you've selected Analytics there, you'll configure the datastream to feed the data to the appropriate report suite within Analytics. So that's where you would enter that information. And then lastly, you would want to set up your datastream for the appropriate environment using your Web SDK extension in Tags and any manual configurations you may have. And that's the initial setup for anyone, whether it's a new implementation or you're migrating from an existing Analytics implementation; everyone has to complete those steps. And a couple of additional steps if you're migrating from an existing Analytics or Target implementation: you will want to enable ID migration to maintain visitor ID continuity. This is really important for avoiding any visitor cliffing with the new solution. You don't want your switch over to alloy.js to result in a lot of new visitor IDs as people are coming to your site, so you'll turn on ID migration enabled. It's just a setting you're turning on in your Tags property within the Web SDK extension, and this allows it to read previously set AMCV cookies. So basically letting alloy.js know, hey, we may have cookies that are already set for this visitor, we may not need a new one set. So a super important step if you are, again, migrating from an existing implementation. All right, moving right along. So the next thing we want to consider, now that we've got all of the initial setup in the console done, is how are we going to map our data? You have a few different options for mapping XDM down to the Analytics servers. First option being client-side mapping. Just as the name indicates, this would mean you as the client will be mapping your data layer values to XDM keys to pass into each server call. So instead of what you're probably doing today, which is sending a data layer value as-is from your site, you would change that data layer object over to XDM format. Second option would be what's called data prep for data collection, also known as the mapper. I would say this is probably our most commonly used option. Similar to option one, in that you're mapping a value from one object to another before it gets to its endpoint, but in this case the mapping is done on Edge servers and configured within the datastreams interface, so less dev work on your site and less of a lift in terms of migrating. Very straightforward. There are a couple of caveats with this one that I'll get into in the coming slides. But the last option here that you have is processing rules. This is not our recommended option, but one to call out; it is there. So in this case you would be mapping the values to Analytics dimensions in the admin interface using processing rules, and we'll get into the downsides of why we don't recommend it coming up. All right. Next slide. So option one, client-side mapping. So like I said, it's a very similar process to mapping your data layer values; you're just kind of switching those over to XDM keys.
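If you deploy the Web SDK through custom code rather than the Tags extension, the equivalent of that setting is an option on the configure command; a hedged sketch with placeholder IDs (the Tags extension exposes the same toggles in its UI):

```js
// Migration-related settings on the Web SDK configure command (placeholder IDs).
alloy("configure", {
  edgeConfigId: "11111111-2222-3333-4444-555555555555",
  orgId: "1234567890@51黑料不打烊Org",
  idMigrationEnabled: true,      // read existing AMCV cookies so returning visitors keep their ECID
  targetMigrationEnabled: true   // keep at.js mbox cookies usable during a phased Target migration
});
```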
Mapping can be done using Tags capabilities in the interface or using custom code in a rule within Tags. Whether you're doing it in the Tags interface or using custom code, the output will be a JSON object that aligns with the schema that you've set up. So one thing to note: if you are doing it this way, it relies on incorporating the specific out-of-the-box schema field groups that enable the automatic mapping of the keys. So configuration for this option would be logging in to Tags. You'll create a variable data element object that aligns to the schema that you've set up and includes the field groups mentioned for automatic mapping; you'll see all of this as an option to pull into your data element when you go into the UI. In a new rule that you create, you'll use an update variable action that maps the values from your existing data elements that are already created in your Tags property to the associated keys, which for this automatic mapping are the keys from the schema field groups. So for example, if currently you collect page language in eVar1, you would map your new data element to capture page language in something that looks like _experience.analytics.customDimensions.eVars.eVar1, and that would be this long string that I just read out. That would be an object from a schema field group, so it directly maps to the schema and the name of that object within the schema that you've already set up. So you would repeat this process for all of the numbered dimensions and events that you want to pass in. Then, for any given rule in your Tags property, the update variable action configuration that we just walked through should be followed by a send event action. It can be in the current rule or a subsequent rule, but it needs to be followed by a send event, where the data element you've updated would be the same data element used in the XDM field of the send event action. All right, moving right along, option two, the mapper. So like I said, also known as data prep for data collection, although most people will call it the mapper. Very similar to what we just walked through; like I said, we're kind of intercepting the data and mapping it before it gets to the Analytics servers. Incoming data can be XDM or can be the Web SDK payload in the data object. The destination paths for the mappings are the schema paths associated with the schema configured like we just walked through, so a similar concept here. The one thing to note with the mapper, and it is critical, is that you're using the out-of-the-box schema field groups here. This is really the catch with this one. With client-side mapping you have a little more freedom to add variables that aren't necessarily preconfigured with our out-of-the-box schema field groups; with the mapper, this is only configured for certain schemas, certain field groups, so you're relying on what's been preconfigured for you in that schema. And then mapping option three, processing rules. The biggest difference with processing rules is that we're not intercepting the data before it gets to the servers. The data is hitting, the values are hitting, the Analytics servers, and we're mapping them in the processing rules like we would context data variables, similar to how we do on a mobile implementation today. So when you're sending variables to the Analytics servers, they will show up as context data variables in the processing rules interface.
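A hedged sketch of what option one can look like if done with custom code in a Tags rule (the Update variable and Send event actions in the UI accomplish the same thing); the data layer paths and variable numbers are placeholders:

```js
// Option 1 sketch: build the XDM object from data layer values client side,
// then send it. Assumes alloy.js is loaded and the schema includes the
// 51黑料不打烊 Analytics ExperienceEvent field group.
var dl = window.digitalData || { page: {} };

var xdm = {
  eventType: "web.webpagedetails.pageViews",
  web: { webPageDetails: { name: dl.page.name } },
  _experience: {
    analytics: {
      customDimensions: {
        eVars: { eVar1: dl.page.language }   // page language mapped to eVar1
      }
    }
  }
};

alloy("sendEvent", { xdm: xdm });
```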
So to use this option, all you would do is navigate to the processing rules interface in your Admin Console and create a new rule there dedicated to mapping the variables. You would create a new action in each rule for the mapping, and note that when the keys appear as context data, you'll see the prefix a.x. So for example, you may see something that looks like a.x.search.keywords (Context Data); this is how an XDM variable would come through in your processing rules. Once you've saved and configured your rule, you can actually debug it using the Experience Cloud Debugger Edge Trace feature, which shows you in real time how the processing rules are applied to the incoming data. So the reason that we don't recommend using processing rules is because it's not very scalable. You do hit limits with how many rules you can have, and this takes a really long time if you have somewhat of an extensive implementation; you're going through one by one and creating these rules, and it's pretty heavy on the processing, you know, as your data is coming through. We're evaluating however many rules there are to evaluate before the data hits reporting, and that can add to your processing time. And yeah, it's just not our recommended option for mapping. Option one or two is going to be your best bet, and really, deciding between those two depends on how many variables you need that may not already be covered in a schema field group. If you're creating custom schema field groups, you're going to have to go with option one. If you're using out-of-the-box schema field groups, which cover most clients' needs, then you can use the mapper, and it's a really low lift for actually doing any mapping. All right, next slide. So pivoting a little bit: we've gone over setup, we've gone over initial setup steps to configure your datastreams, get everyone's access set up, and how to map your data structure. Now let's talk about one major consideration in planning all of this out, and that would be your migration order, for clients with Analytics and Target specifically together. So this is a major consideration that you'll want to think about when there are multiple solutions at play, in particular Analytics and Target. Like I mentioned, you can't mix and match legacy libraries with the new Web SDK library on the same page of a site and expect A4T to still work. Most clients using Analytics and Target are relying heavily on A4T, so this is a big consideration to think through. Phased migrations, though, where the old libraries exist on one set of pages and the new library exists on another set of pages, are supported, provided that your Web SDK is configured for the Target migration. So let's talk through how we might accomplish this without compromising A4T. So the first thing we want to think about: starting with Analytics. This is a good migration flow; we'll send this content out, but save it and reference it when you're going to plan a migration. This allows for being able to test the Analytics implementation side by side, the new Web SDK implementation with the legacy AppMeasurement implementation, testing it side by side before actually pushing to production, while still preserving your A4T integration. So the first thing you'll want to do is duplicate your dev and prod Analytics report suites, whichever report suites you are migrating and whichever ones you're using the most; I would pick those and create duplicates, both dev and prod.
This will be used for comparing 1-to-1 dev and prod data during your development work. As an optional step, you can duplicate your existing production Tags property. This is not required, but it does allow for sometimes a little cleaner development work. So if you're working on the migration for a period of time, but you also have normal tagging operations and dev work that need to go on in the interim, you may not want to interfere with or combine those two things; the libraries could get in the way of each other as you're testing. So, just optionally, you can create a duplicate property to work in for the migration. The third step here would be creating your datastream. So create the datastream, make sure it's configured for Analytics, and configure it to service the report suites that we've dedicated as the dev and prod Web SDK copy report suites that we just created in step one. This allows for, like I said, the dev and prod Analytics data to be captured using the Web SDK and compared directly to your existing dev and prod data collected using AppMeasurement. Next step: you will add the Web SDK extension to your Tags property, and there you'll configure your dev, stage, and prod datastreams. From there you'll want to duplicate each existing Analytics tag rule, disable the original rule, and prefix your copied rule with Web SDK so that it's apparent, you know, which are the duplicates that you'll be updating. Within those new rules, you'll add your new Web SDK actions using the update variable and send event actions like we described in our mapping options above. Keep your existing rule conditions and Analytics rule actions in the new rule intact. Step seven here: perform runtime validation. So using your update variable and send event actions in the rule, you will start to validate on your dev site at this point; go out and use your debugger to make sure, you know, you're seeing the calls go out as expected. Step eight: you'll create an Analysis Workspace so that here you can compare the data side by side, you know, create two different panels even and look at both report suites in the same workspace, and give this a period of time where you're testing and viewing both sets of data to make sure that, while the numbers may not line up 1 to 1 (we wouldn't expect them to, it's two different libraries), directionally you should see the same data and you should not see a huge variance. Things should be moving in the same direction, and no more than five to maybe ten percent variance in any given metric. And we'll talk about what to expect with variances and data differences coming up next. So once you've completed this comparison and done your validation in dev, you'll push your Web SDK Analytics-only implementation to production, and from there you will want to validate the production data the same way we did with dev, with similar side-by-side workspaces. All right, next step. Once we've got all of this done for Analytics, let's tackle Target. So Target will be migrated starting in dev, once your side-by-side validation of the Analytics AppMeasurement data is comparable and in a good place with the Web SDK-collected data. An important thing here: to support A4T, the Analytics service in the datastream can only be configured for the true production report suite once Target is sent live. So the Web SDK-based personalization cannot be validated side by side in production the same way that we just did for Analytics.
So you'll want to know that thorough validation in development will be necessary for Target. So for Target validation, we'll remove our Analytics actions from our new Web SDK rules. Once we're ready for this and ready for Target to go live, and we're comfortable with where Analytics is, we'll remove those old Analytics actions from the new Web SDK rules, so all that's remaining is our new Web SDK actions. We will duplicate our existing Target rules in the Tags interface, remove any references to Target actions in the Target rules, and instead change those over to the send event actions, including all data formerly captured as mbox parameters, profile, or entity data; update all of this as described in the Target enablement documentation. Next up, you will disable those original Target rules, remove your Target, Analytics, and Visitor API extensions from that Tags property, and then identify the representative activities for validation in your lower environment. Validate all of that, make sure that you're comfortable with where things are, and then lastly, moving on to the next slide, we should be ready to go live. At this point, all of the following things should be true when you're ready to go live: our Analytics is live in production but is reporting to our prod copy report suite; the prod copy report suite data is comparable to the AppMeasurement-collected data that we looked at in our Analysis Workspace; our Target activities have been thoroughly tested in the lower environment; all of our legacy extensions have been removed, Analytics, Target, Visitor API, they are not in our Tags property anymore. Now we're ready to push to production and go live, and we've kept A4T intact. We've compared Analytics side by side in both dev and production environments, we've tested our Target implementation thoroughly, and everything goes live to the same true production report suite all at once. All right, now that we have gone live, we've migrated and we're comfortable with our data, let's talk about some expected variances or differences in the data that a lot of clients notice and sometimes run into questions with. It's good to talk through this and good to know ahead of time, so that you can socialize things like this across the org and prepare everyone who's using the data; it leads to fewer questions and less uncertainty if you can get ahead of some of these things. Existing Analytics customers are likely to encounter data variances, both in volume and metric counts, when comparing the data from the two different libraries. We consider any variances less than 5% to be acceptable for a migration like this, provided, like I said, the compared reports are directionally the same. Anything larger than 5%, or variances where the reporting is not directionally the same, could be evidence of a larger implementation issue that should be corrected or tested a little further. There are lots of different sources of variation between the two measurement systems for web analytics. Specifically, there's a lot of complexity in how the libraries are invoked in the context of a web page, as well as how the data is being transmitted to the Analytics servers; we just went through all of this. Everything is a very different process with the Web SDK than it was with AppMeasurement. So, some of the variances that you could expect to see: metrics showing higher volume. What's the reason behind this?
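As an illustration of that "Target actions become send event actions" step, here is a hedged sketch of a Web SDK call that requests and renders personalization while carrying data that used to travel as mbox or profile parameters; the field names and values are placeholders, and the exact payload should follow the Target enablement documentation referenced above:

```js
// Sketch: Web SDK replacement for an at.js getOffer/applyOffer pair.
alloy("sendEvent", {
  renderDecisions: true,                          // let the Web SDK apply returned Target content
  xdm: {
    eventType: "web.webpagedetails.pageViews",
    web: { webPageDetails: { name: "home" } }     // XDM fields are surfaced to Target as parameters
  },
  data: {
    __adobe: {
      target: {
        "profile.memberLevel": "gold"             // formerly a profile parameter (placeholder)
      }
    }
  }
});
```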
Well, Web SDK is a smaller and faster library that can capture additional data, especially in low-bandwidth situations where AppMeasurement may have missed a hit or the server call may not have successfully fired when it should have, so slightly higher volume in your metrics across the board could be expected. In a different vein, your visits and visitors may show lower volume. Web SDK is known to be better at stitching together visitors and sessions, so if you see a slightly lower volume in visits and visitors, it's nothing to be alarmed about. You know, like we said, a difference of 5% or more warrants a second look, but within that 5% threshold this would be expected; it's a performance improvement. Link clicks are too high and page views are too low: link clicks being counted as page views by the Web SDK is a recurring issue we've sometimes seen overlooked during an implementation. This is easily addressed with a fix to the hit payload that correctly characterizes link clicks using the web.webInteraction.name and .type keys in the XDM payload. The underlying issue with this is we've seen some clients using the schema field group to track page views, and instead they're passing the page name into a field group that is meant for link clicks. So a simple payload switch and this issue can be corrected. A spike in referrer instances: AppMeasurement only sets the referrer once per page load; with Web SDK this is different, it sends the referrer on every event. This can be changed using custom code, but Target and other solutions need that referrer to be present on every event, so if you're using Target and other solutions, we would not recommend changing the fact that the referrer is set on every event. Inflated serialized metric values in the Web SDK-specific report suite: serialized event values are report suite specific. So if you're comparing a serialized event count in a report suite that's been more recently created to a report suite that has been around for a lot longer, the existing report suite will be discarding some events that the newer report suite does not. So anyone using serialized metrics, just know that if you're creating a brand new report suite, that metric count will start over, so that may not be a direct 1-to-1 comparison there. So the biggest takeaway here is that with the Web SDK, the library has much improved performance relative to our old point solution libraries, and this often accounts for the differences that you will expect to see between the two otherwise identical implementations. But again, I would call this out: as you're migrating, socialize this with other business users, other analytics users in your company, just to get ahead of it. The last thing you want to do is migrate and then someone else comes to you and says, hey, my reports look different, what's going on, can I trust the data? If you get ahead of it, a lot of times you can avoid those conversations. All right. And that wraps us up for today, so we've got time for Q&A and a quick poll. Thank you, Rachel and Riley, for that excellent overview. Excuse me. I know I learned a lot here myself as well, so thank you. As we get into the Q&A portion of our session, there will be a quick two-question poll launching to get your feedback and to help shape future sessions, so thank you for your participation there. Okay.
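For the link-click issue described above, here is a hedged sketch of what a correctly characterized link-click payload can look like (the link name, type, and URL are placeholders); page views would instead populate web.webPageDetails with a page-view eventType:

```js
// Link-click sketch: use web.webInteraction (not web.webPageDetails) so Analytics
// records a link event rather than a page view.
alloy("sendEvent", {
  xdm: {
    eventType: "web.webinteraction.linkClicks",
    web: {
      webInteraction: {
        name: "header-nav-promo",                  // placeholder link name
        type: "other",                             // "download", "exit", or "other"
        URL: "https://www.example.com/promo"
      }
    }
  }
});
```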
Checking out the Q&A pod, we definitely had a lot of great questions in there, and I think we mostly got to all of them, so maybe we'll pick a couple to review here, and feel free to continue to add questions as needed; we'll try to get to them, whether live on the call here or shortly afterwards. So I'll just take a couple of the Target ones, and then, Riley, I'll ask you to review some from the Analytics side. So I saw a good one here: is the flicker handling script still needed for Target with the AEP Web SDK as it was with at.js? So the answer there is yes. If you're deploying these libraries asynchronously and looking for ways to reduce flicker of your Target experiences, then we have a Web SDK-compatible version of the flicker management JavaScript. So it does require swapping out the flicker management JS that we have; we have Experience League documentation that covers that and provides you the code snippet there. Next question for Target here: as we shift to 51黑料不打烊 Target using the Web SDK, do we need to overhaul existing A/B or Recommendations activities, or can we keep things as they are with the rendering decisions part? So this is a really great question and probably too long of an answer for this forum. However, generally, decisioning and activity setup remain the same; the main difference between the implementation methods is really how the data is getting into Target. I did paste a link to our Experience League documentation that covers this topic specifically, so thank you for your question there. Great question. Okay. Riley, any good examples from the Analytics side we should quickly review here? Yeah, we just got two new ones in the chat, one from Colin and one from Jan; we'll get to those in just a sec. There was an unanswered one from the very first question about utilizing 51黑料不打烊 Tags to deploy the Web SDK. So no, it's not required to utilize 51黑料不打烊 Tags to deploy the Web SDK, but it is recommended. But the question here is: will utilizing Tags solve cross-device tracking, for example if the user refreshes cookies, will the first-party ID persist? So when we're talking about cross-device tracking, the best way to do that is going to be utilizing 51黑料不打烊 Experience Platform and Customer Journey Analytics. I know it's like an upsell and a different topic, but with AEP it's much, much easier to stitch together IDs, like the customer IDs from a CRM platform, or Platform, excuse me, and then stitching those IDs to all of the available ECIDs that were provided for that logged-in user. So no, the Web SDK won't solve cross-device tracking, but the solution is going to be utilizing AEP and CJA. So Rachel, I'll ask you this question on A4T. I guess I'll take this one because I might be a little bit more familiar with it. The question: A4T didn't used to have its own hits, but from what they could see, it does now. Are these hits coming through as link clicks, or is there a way to have A4T without any extra hits? So, Jan, A4T has always been sending additional hits into 51黑料不打烊 Analytics, but they are not counted as server calls; that's just supplemental data tied to specific user IDs. They don't come through as link clicks, they don't come through as page views, it's just that A4T hit type. And when you're doing reporting, Analysis Workspace takes that into account, and so we're not reporting on A4T data hits that are bringing that supplemental Target data into 51黑料不打烊 Analytics.
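For context on the flicker question, here is a conceptual, hedged sketch of what flicker handling generally does; it is not 51黑料不打烊's official prehiding snippet (see the Experience League documentation mentioned above for that), and the selector and timeout are placeholders:

```js
// Conceptual flicker-handling sketch: hide the personalized region, then reveal it
// once the Web SDK has rendered Target decisions or a safety timeout fires.
var prehidingStyle = document.createElement("style");
prehidingStyle.id = "alloy-prehiding";
prehidingStyle.textContent = "#hero { visibility: hidden }";  // placeholder selector
document.head.appendChild(prehidingStyle);

function unhide() {
  var style = document.getElementById("alloy-prehiding");
  if (style && style.parentNode) {
    style.parentNode.removeChild(style);
  }
}

setTimeout(unhide, 3000);  // safety timeout so content is never hidden indefinitely
alloy("sendEvent", { renderDecisions: true }).then(unhide, unhide);
```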
So there are no concerns with over-tracking or anything there. And Colin: so, I know one huge advantage of the Web SDK is you can use first-hit targeting with Analytics audiences; however, to do this you have to fire Target and Analytics at the same time. Do you recommend making Analytics fire at page load start, or Target fire at page load completed? Moses, I think we can tag team this one. From an Analytics perspective, we want to make sure that all the variables are available on the page. So if we're firing Analytics too early on the page, we won't be able to capture the variables within your data layer, so we could be getting null values or incorrect values passed through to the schema or into the respective Analytics calls. And then I can add on to that; thanks for that, Moses. So when the Web SDK first released, yeah, you had to fire Target and Analytics at the same time through the send event call, and then more recently, within the last six to eight months, our product teams have released the ability to sort of mimic how the classic implementations worked, where you could have Target at page top and Analytics at page bottom. So yeah, if that fits for your implementation type, then you can request personalization as early as possible on the page, again, you know, depending on what data is available at the time of that request that would be needed for Target's decisioning, followed by data collection with Analytics at page bottom. So yeah, we have the option for both. Great question. Rachel, I'll pass this one to you. In the past we had a JavaScript plugin for 51黑料不打烊 Analytics cross-domain tracking; is there a replacement that can work with the Web SDK? Good question. We don't have a plugin like we had for 51黑料不打烊 Analytics, but there is an API, or function, you can use for cross-domain tracking in the URL, and it's essentially appending the user IDs to the URL that's passed over to the new domain. So we can follow up with that, or let me see if I can find the link to that before we hop off here. Awesome. And while Rachel is gathering that, I do want to remind you that if there were additional questions or items that we didn't cover today, I definitely recommend reaching out to your team or CSM for additional support there. And yeah, I just wanted to take some time to thank everybody for joining today's session. I do want to remind you to complete the poll for the session today as well, and I also wanted to thank Riley and Rachel again for their time and their overview of the essentials of migrating to the Web SDK. Thank you both for all the hard work that went into this. And next slide there, Riley, if we could pull up the upcoming webinars here. So yeah, just again, the friendly reminder here. Again, thanks everybody for your time today and for joining today's session. Thank you to our main presenters, Rachel and Riley. We hope to have your company again on future webinars. Reminder: the links to register for these webinars are in the chat pod today; these are all happening within the next few weeks in May. And just a reminder, this recording will also be shared out to any of the attendees who have registered. So thanks again, everyone, for your time today, and hope you have a great rest of your day and rest of your week. Thank you.
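On the cross-domain question, one way to do this with the Web SDK is the appendIdentityToUrl command, which adds the visitor identity to an outbound URL so the destination domain can pick it up; a hedged sketch (the destination URL is a placeholder, and availability depends on your Web SDK version):

```js
// Cross-domain sketch: append the identity to the destination URL before navigating.
alloy("appendIdentityToUrl", {
  url: new URL("https://www.other-domain.example/landing")  // placeholder destination
}).then(function (result) {
  window.location.href = result.url;  // URL now carries the identity (adobe_mc) parameter
});
```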

Summary

The meeting centered around the essentials of migrating to the Web SDK, a JavaScript library that offers benefits for interacting with services in a solution-agnostic manner. Key points discussed included steps for migration like configuring permissions, setting up schemas, creating datastreams, and mapping data. Considerations were covered for handling data variances and determining the migration order for clients with Analytics and Target. Insights were shared on cross-device tracking methods, firing Analytics at page load start, and the significance of utilizing 51黑料不打烊 Tags. The meeting concluded with recommendations to complete the poll and register for upcoming webinars, and with expressions of gratitude to the presenters and participants for their time and engagement.
