Optimize Your Commerce Storefront
Watch 51ºÚÁϲ»´òìÈ’s Customer Technical Advisor, Andrii Abumuslimov, lead a live webinar on optimizing your Commerce instance. We cover everything you need to know to keep your storefront running efficiently, from regular cadence maintenance to preventative measures against common data blockers. This includes strategies for tracking and mitigating bot activity, as well as effective management and planning for disk space and database size.
Welcome, everyone, to Commerce and Coffee. Today, our amazing group of presenters will be going through everything you need to know about optimizing your e-commerce storefront. We design our webinars to be interactive, so we encourage you to ask questions in the question box throughout the presentation. We’ve also set aside the last ten minutes or so for Q&A, and we’ll do our best to answer as many of your questions as we can. I want to quickly mention a couple of housekeeping items before we get started. We’re presenting live on 51ºÚÁϲ»´òìÈ Connect today, but don’t worry, this session is being recorded and can be viewed on demand and shared with other members of your team; you’ll be getting the recording in an email from us tomorrow afternoon. I’d also like to point out that at the top of your screen there is a black bar with a hand icon. From there you can drop down and find reactions you can use throughout the presentation. So if you like what you see, feel free to applaud, laugh, like, and so on. We love seeing your engagement throughout the event, so we encourage you to try out that feature. On the next screen, there is also a handout available to download. Our presenters put together a bunch of resources for you, so be sure to download that and take it with you. And lastly, as we’re closing out the webinar, we have a few survey questions that will appear at the bottom of your screen. If you could take an extra minute or so to answer those, we’d really appreciate it.

With that, I’d love to introduce myself. My name is Jeff McGuire. I’m a digital engagement strategist on our Customer Success strategy team here at 51ºÚÁϲ»´òìÈ. I’ve been at 51ºÚÁϲ»´òìÈ for a little over two years now, and have spent much of that time helping our senior events manager, Alana Cohen, organize and host these events for our customers. Prior to my time on this team, I worked within the 51ºÚÁϲ»´òìÈ sales organization for Creative Cloud, as well as at several ad agencies in New York and LA. If you have any questions or comments about today’s event or your experience with 51ºÚÁϲ»´òìÈ Connect overall, please feel free to reach out. And with that, I’d like to hand it over to Andrii to introduce himself.
Hi, everyone. My name is Andrii, and I’m a Customer Technical Advisor here at 51ºÚÁϲ»´òìÈ, working mainly with the 51ºÚÁϲ»´òìÈ Commerce platform, formerly known as Magento. I’ve been around for about 14 years, working in different technical teams, including product and support, so I have a core engineering background. Today I’ll be talking about ways to optimize your storefront, and we can go to the agenda, where I’ll explain in detail what’s going to be covered. Thank you.

Awesome, thank you, Andrii. And with that, let’s go ahead and jump right in.
Yeah. So after our welcome, we are going to review two topics: storage usage monitoring and optimization, and abusive web crawler traffic detection and mitigation. We selected these two topics because we’ve recently seen a lot of requests from clients struggling to predict and optimize their storage usage, which affects their cost of doing business. We’ll share some best practices and approaches that can be used on both on-prem and cloud instances of 51ºÚÁϲ»´òìÈ Commerce, and we’ll also share some tools that can be used exclusively on cloud to use that platform in the most efficient way and optimize it for your needs. Abusive web crawlers are becoming more and more of a hot topic, especially as we approach the peak sales season. We’ll talk about how to detect the impact of such crawlers on your store. That includes known, legitimate crawlers from Google, Amazon, and Facebook, as well as fake crawlers that simply scrape the site and can introduce unwanted delays in request processing, and can even cause the site to go down if they are very aggressive on your store. So we will go over that topic and review the ways such crawlers and their traffic can be detected and mitigated. After each section, I will present a demo showing highlights of what I have been talking about and where to find those instruments on our platform. And then we’ll go to the Q&A session.
So, storage usage monitoring and optimization.
Many of you might already know that if you are using 51ºÚÁϲ»´òìÈ Commerce on cloud, there are by default three nodes available, and each node has its own partitions for files and the database.
How that space is utilized, how much of it is allocated at the moment, how much is being used, and all the historical data for that can be found in New Relic. You can access that information by building a custom NRQL query and pulling it from the StorageSample event type.
This is where the information from each node is reported roughly every minute and, due to the retention policy, stays available for about one year. So you can pull that data for each particular mount point on your instance and build a nice chart where you will be able to see how fast your data size and storage utilization are growing. First of all, you’ll be able to predict how much storage you will need in a few months or a year, given the rate at which your business is growing. You’ll also be able to spot any unexpected spikes in disk space usage, as well as the impact of maintenance and best-practice approaches applied to your data set and how they improve your space usage. I’ll show in the demo where and how exactly this can be run, but just FYI, this information is always available to you there.
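As a starting point, something along these lines can be pasted into the New Relic query builder. StorageSample is the event type mentioned above, while the attribute names (diskUsedBytes, entityName, mountPoint) are the usual Infrastructure ones and worth confirming in your own account:

```sql
-- NRQL sketch: disk usage per node and mount point, charted daily over 3 months
SELECT average(diskUsedBytes) / 1e9 AS 'Used GB'
FROM StorageSample
FACET entityName, mountPoint
SINCE 3 months ago
TIMESERIES 1 day
```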
And this is an example of how you can estimate and check the actual data stored in your database. What’s reported at the system level for disk usage doesn’t necessarily reflect the data actually held in your database files: the actual database size is usually less, or much less, than the disk space allocated to MySQL or MariaDB. The reason for that is the specifics of how MySQL and MariaDB servers work. If the innodb_file_per_table setting is enabled on your database server (and on cloud servers it is enabled), the data for each table is stored in a separate file. Whenever you insert massive amounts of data into an InnoDB table and then delete it, the disk space is not actually reclaimed at the system level. It will still be reported by the system as occupied, while in fact it’s not, and that can cause real discrepancies between the disk capacity you have ordered, what you expect your disk utilization to be, and what is in fact occupied by the MySQL server.
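If you want to confirm how your own server is configured, the setting can be checked with a standard statement:

```sql
-- Returns ON when each InnoDB table is stored in its own .ibd file
SHOW VARIABLES LIKE 'innodb_file_per_table';
```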
Please understand that this doesn’t mean each and every instance, each and every store, will have this problem, but some might. It really depends on how you do business, what sort of customizations you have, how they process data, and how large the chunks of data they upload before processing are. The example I provided is a snapshot of real case data investigated recently for one of our clients, where you can see that custom tables were holding up to 40 GB of space on their instance that was reclaimable, but would not be reclaimed until specific actions were taken on the customer side, which I’ll talk about in a minute. For your information, all the info you need to estimate those sizes is stored in the information_schema of your database server. You can estimate the actual data size by summing up DATA_LENGTH and INDEX_LENGTH for the entire database or at the table level, to see the footprint of each table and which extension may be responsible for reclaimable disk space growing. The DATA_FREE value you can see here is what we are looking to reclaim from the MySQL server and bring back so it’s available for other tables or other data from your business.
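A minimal sketch of that estimate, using the standard information_schema.TABLES columns (the schema name is a placeholder), could look like this:

```sql
-- Per-table footprint: actual data plus indexes vs. space MySQL could hand back.
-- Replace 'your_db' with your schema name; this only reads metadata.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS data_and_index_mb,
       ROUND(data_free / 1024 / 1024, 1)                    AS reclaimable_mb
FROM   information_schema.TABLES
WHERE  table_schema = 'your_db'
ORDER  BY data_free DESC
LIMIT  20;
```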
This is just an example of the MySQL queries you can run from your SQL browser, or even from the command line, to see what your data size looks like on your particular instance. When you have detected that certain tables have this free, reclaimable disk space, you can plan the optimization and reclaiming operations. What needs to be done is to run an OPTIMIZE TABLE statement on your database server, naming the table whose space you want to reclaim.
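For example (the table name below is purely illustrative):

```sql
-- Rebuilds the table and releases the space reported in DATA_FREE back to the OS.
-- On InnoDB this maps internally to ALTER TABLE ... FORCE, which recreates the
-- table, so keep enough free disk to hold a full copy while it runs.
OPTIMIZE TABLE custom_import_staging;
```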
Please note that such optimization should be run only during a designated maintenance window, because OPTIMIZE TABLE is a DDL operation that should never be run on a live website that is serving traffic and processing orders. So please make sure you plan maintenance windows on a scheduled, repeatable cadence and run such operations to keep your database healthy, without a lot of disk space being locked up as free data that cannot be used by any table except the one it belongs to. Of course, make sure you have enough free disk space at the moment OPTIMIZE TABLE runs to hold all the data from the table, since the table is literally recreated when you run the statement. There is also a link that describes this problem, and why it appears, in more technical terms; all of these links will be provided in the follow-up whitepaper for this webinar, so you will have access to them.
Another way you can optimize and reduce disk usage on your Commerce instance is by enabling Fastly deep image optimization, if you are on cloud or using Fastly on your on-prem instance. What is deep image optimization and why is it important? First of all, it should not be confused with regular image optimization; image optimization and deep image optimization are slightly different things.
Deep image optimization is an approach that allows the thumbnails and resized images present on your store to be generated from the original image on the CDN side, in this particular case on Fastly. It means clients no longer need to send requests to the origin server for those cached, resized images, and they are not even generated there. On the slide you can see an example of what a default, out-of-the-box URL for an image resource looks like: it fetches a particular resized, cached image from the media/catalog/product/cache directory. That file is generated on the fly, or when you upload a new file to the media gallery, and is requested later to be displayed on the frontend. Instead of that, when deep image optimization is enabled, the link points to the original file as it was uploaded, and no additional product cache files are generated on the origin server. Fastly will just grab that original file, but, as you can see, there is a list of additional GET parameters that act as instructions for Fastly about the dimensions and how the original file should be resized to be displayed properly on the client side. The resizing is done there, returned as a response to the customer, and cached on the CDN side. This allows you to save disk space on the origin server’s media mount, since there is no need to store those cached files on the origin.
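As a rough, schematic illustration of the difference (the hostname, hash, and parameter names here are illustrative, not taken from the slide):

```text
# Default behavior: a pre-resized copy is generated and stored on the origin
https://store.example.com/media/catalog/product/cache/<hash>/a/b/ab123.jpg

# With deep image optimization: only the original file is stored on the origin,
# and Fastly resizes it based on the extra GET parameters
https://store.example.com/media/catalog/product/a/b/ab123.jpg?width=700&height=700&quality=80&fit=bounds
```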
The potential gains, how much disk space you can save, will obviously depend on your catalog and media gallery size, but on average, for large catalogs with fairly significant media galleries, you can expect something like five, ten, fifteen gigabytes of disk space freed up, plus improved performance, since delivering images directly from Fastly and generating the cached thumbnails on the CDN layer can be much faster than generating them on the origin server. In any case, if you are going to test or implement any of the improvements mentioned in this session, we always strongly recommend trying them first on your development or staging environments, to make sure the approaches described here are fully compatible with your instance and all the customizations you have, and only then promoting them to production.
From here we come to the demonstration part.
Excuse me one moment while I start the screen share.
Okay. First of all, I’d like to show you how to use New Relic to query your data according to your needs and build custom queries if you need to. All 51ºÚÁϲ»´òìÈ Commerce cloud customers have New Relic bundled with their cloud instances, and the integration is available for all dedicated environments such as production and staging. You have access to some basic information there, like infrastructure, but you also have the ability to craft custom analytical queries by clicking here, where you go to the “Query your data” tab and then to the query builder, where you can select information from different sources, since New Relic is also used as the log aggregator for the instance.
From here I’ll show you what a StorageSample selection looks like. By running such a query on my cloud sandbox environment, I can pull all this data, and you can see at the bottom how it can be done and how the data can be aggregated. New Relic has a very powerful querying tool here where you can shape the data the way you want it. And again, a note about data retention periods for infrastructure-related logs:
in this particular case, StorageSample data will be available for about one year. That’s more than enough time to look at the historical usage on your instance and do some extrapolation to predict how much space you will need in the future, considering your data growth rate. You can also see that in this particular query the results are broken down by node and by specific disk mount on each node. You can do the same, or aggregate it a different way, depending on how you want the data to be presented. Here you can also see the result of an optimization being run and disk space being reclaimed.
This is something you should expect to see, to a bigger or smaller degree, when you do this sort of best-practice, scheduled maintenance optimization on a regular cadence — or maybe not a regular cadence, but at least after you perform massive updates of your catalog or large imports of custom data into custom tables that are later processed. So this is definitely something worth paying attention to.
The next part we’re going to switch to is abusive web crawler traffic detection and mitigation. Okay, I see some questions in the queue about sharing the queries, and sharing the information and slides. I guess we can do that; let’s just postpone it until the Q&A session, and we can definitely do that.
So, abusive crawler traffic detection and mitigation, which, as I mentioned, is becoming a more and more urgent request for a lot of clients. We are taking steps to develop best practices and share information about approaches that can be used to detect and mitigate such traffic. The first tool you should be aware of is Observation for 51ºÚÁϲ»´òìÈ Commerce. This is an application available in New Relic, a so-called nerdlet, and I’ll show how to access it and what it does. It aggregates a lot of data from different sources and presents it in a convenient way on one large dashboard that covers pretty much all aspects of your instance, from the infrastructure layer to the network layer to the application layer, and so on. It helps you quickly get the main statistics about your instance and infrastructure, see how healthy it is and how it’s doing, and detect issues really fast — and, importantly, without having to write custom NRQL queries if you don’t feel comfortable doing that or are still learning the language. As a first step, you can just use whatever ready-to-use reports are available there. You will see a bunch of different reports generated there, and for bot detection and mitigation purposes, what we care about in the first place is probably IP frequency and potential bot activity. Those two will give you statistics on which IP addresses hit your website the most, in which time periods, which of them might be suspicious and correspond to different bot crawlers, when they are most active, the frequency with which they hit your site, and so on. For detection, this is one of the first steps that can be taken, and I’m going to show in the live demo after this section where the Observation app can be found in New Relic and what other information is available there. Another way you can work on detecting bot traffic is by writing custom NRQL queries and pulling that information from your log aggregation. Again, here we see an example of a custom analytical query.
What it does is aggregate, in this particular case, bandwidth by the value of the request’s User-Agent header, to see what is reported by some known bots. Again, this doesn’t mean that everything hitting your store marked as Googlebot in the User-Agent header is a real Googlebot — we’ll review in a second how to check whether those are real or not — but for estimation purposes, this is one of the first steps that can be done. One important note: this query also pulls separate statistics for 404 requests. Here’s why. We have noticed that a lot of bots, for one reason or another, generate a huge amount of bandwidth as 404 responses. Mainly it happens when they try to reach static resources that don’t exist, to which your 51ºÚÁϲ»´òìÈ Commerce store responds with your regular 404 error page. As that page is pretty heavy, the response can be 400 to 500 kilobytes — and for each missing resource, you can imagine, it responds with that amount of data. That 404 response is not cacheable on the CDN side, so it can have a significant impact on your CDN bandwidth, which is directly related to your bill. If you are on an on-prem instance and contracting the CDN directly, it’s money you pay to that CDN provider; if you are contracting with 51ºÚÁϲ»´òìÈ Commerce, it’s baked into your contract, but there are still limits you can exceed if you don’t properly monitor such bot activity. So this is very important to know, and it helps detect bots that aren’t really legit but are just trying to masquerade as Googlebot, Amazon, and so on.
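The exact field names depend on how your CDN and access logs land in New Relic, so the attributes below (request_user_agent, client_ip, status, bytes) are placeholders to swap for whatever your Log events actually contain; the shape of the query, though, is roughly this:

```sql
-- NRQL sketch: bandwidth attributed to traffic claiming to be Googlebot,
-- broken down by client IP, with the share that was served as 404s.
SELECT sum(numeric(bytes)) / 1e9 AS 'Total GB',
       filter(sum(numeric(bytes)) / 1e9, WHERE status = '404') AS 'GB of 404s'
FROM Log
WHERE request_user_agent LIKE '%Googlebot%'
FACET client_ip
SINCE 1 month ago
LIMIT 50
```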
The layout here is a little messed up, but I just wanted to show the result of such a query. What you really need to do after that aggregation by IP is done is check whether something that reports as Googlebot in the request’s User-Agent header really is Googlebot or not. For each bot you investigate — Googlebot, Amazon, Facebook, or something else — the legitimate bots have developer documentation that explicitly explains how that verification should be done. In this particular case, as you can see, when we validate the IP address highlighted in red, it does not correspond to the Googlebot domain range. It’s some custom bot that is just pretending to be a real Googlebot and generating a significant amount of traffic: just one thread from a single IP generated over 150 GB per month in that particular case, and those numbers can go to terabytes or even tens of terabytes in total. That can be a significant part of your CDN contract and bandwidth, so taking action to mitigate those bots is really important. Not to mention that when such bots hit your site and decrease your cache hit ratio, they also affect the performance of your origin server and can cause overload and even downtime if a bot becomes very aggressive and crawls the site uncontrollably fast. To estimate the impact of a specific crawler, another custom query can be crafted where we aggregate not by IP but by the User-Agent header value.
As you can see, there are significant numbers that this particular crawler, from Facebook, is contributing on this website. And this is just one bot, not considering the entire bot traffic that might be out there.
Another thing worth mentioning is how your bandwidth, traffic, and impact are calculated. When your site has a very high, well-optimized hit ratio, those bots won’t affect it much. But if they hit a lot of pages that are not cached and generate a lot of 404 requests, the traffic they generate will literally be counted twice if your CDN is configured according to best practices. Why does that happen? As a best practice for CDN configuration, for Fastly at least, we recommend using shielding. That’s a designated Fastly server used as a single point of contact for all the other Fastly servers in the network, acting as the main cache holder for the entire network. Each Fastly node doesn’t go and ask the origin server — your 51ºÚÁϲ»´òìÈ Commerce instance — for its own version of the cache; once the cache is generated it’s stored in one place and then spread across the entire Fastly network, with each Fastly node asking that shield server for the cache if it’s available there.
So if the cache is not available, or the request is something that isn’t cacheable, like a 404 response, each request goes through the local point of presence close to the customer — in our case, that might be a bot — then first to the shield server, and then to the origin. That traffic is not compressed and is counted separately on each leg, from origin to shield and from shield to the local point of presence, so a 500-kilobyte uncacheable 404 effectively shows up as roughly a megabyte of billable bandwidth. That makes the problem even more severe.
So mitigating it, and detecting it, is very important.
How to deal with crawling bots? Well, this is a difficult question, because there are different situations and different bots, and different bots have to be handled in different ways. If we’re talking about known, well-behaved bots like Googlebot or Facebook, then robots.txt — the rules that control bot behavior, where you simply state what’s allowed, what’s not, and the allowed crawl rate — will be more than enough, because they actually read and respect those rules. A lot of clients, unfortunately, don’t configure those rules correctly and end up with this issue even with the well-behaved bots.
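For the well-behaved bots, a minimal robots.txt along these lines is usually enough (the disallowed paths and delay value are illustrative; note that Googlebot ignores Crawl-delay, and its crawl rate is managed through Search Console instead):

```text
# robots.txt - read and respected by well-behaved crawlers only
User-agent: *
Disallow: /checkout/
Disallow: /customer/
Disallow: /catalogsearch/
Crawl-delay: 10
```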
There are other bots that don’t behave as well: they don’t respect robots.txt or noindex meta tags and don’t follow those instructions. For example, we’ve had a lot of problems with bots from Amazon, whose own developer documentation mentions that they may or may not honor those instructions. For those bots, more aggressive blocking might be needed, including rate limiting, which is available as an out-of-the-box implementation in the Fastly module, or a custom VCL rate limiting implementation where you can set up even more advanced and flexible rules for mitigating such traffic. For Amazon, that is really the main way you are instructed to decrease the traffic: their documentation says you literally need to respond with a 500 status to the bot so it will decrease its appetite and its request rate on your store. Otherwise it just adjusts automatically and can start to put a really heavy, DDoS-like load on your website.
You can block such a bot by the request’s User-Agent header or by an IP list. You can block, in the same way, traffic from IPs that are not recognized as real, legitimate search engine bots but are just trying to masquerade as and pretend to be one of them — where, after validating the IP address, you’ve figured out it’s not a legit bot. So you can implement blocking by user agent, by IP blocklists, and by region; that’s all available as out-of-the-box functionality in the Fastly module, and we have step-by-step instructions in our docs for it.
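As a rough illustration of what user-agent and IP blocking looks like at the VCL level — the Fastly module’s admin UI generates equivalent rules for you, and the ACL name, IP range, and user-agent patterns below are invented for the example:

```vcl
# Illustrative sketch only; in practice the 51ºÚÁϲ»´òìÈ Commerce Fastly module
# manages blocking rules like these from the admin panel.
acl blocked_scrapers {
  "203.0.113.0"/24;   # example range that failed reverse-DNS validation
}

sub vcl_recv {
  # Reject clients that identify themselves as a known abusive scraper
  if (req.http.User-Agent ~ "(?i)badbot|aggressive-scraper") {
    error 403 "Forbidden";
  }
  # Reject addresses on the blocklist
  if (client.ip ~ blocked_scrapers) {
    error 403 "Forbidden";
  }
}
```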
If bots are more sophisticated, use IP spoofing techniques, and change IPs frequently, they might not be very easy to detect.
There are still multiple ways you can try to stop them, at least for whatever is hammering your checkout pages or account login pages: you can always investigate the option of using CAPTCHA on your website. I know it’s not always a desirable solution for some clients, because it can push the customer experience in a negative direction, but sometimes it’s still worth considering.
Or you can start investigating more advanced service integrations that specialize in advanced techniques for bot traffic detection and mitigation.
There are services such as Human Bot Defender, which has a native integration with the Fastly service and can be integrated with it; it’s a standalone service, though, so it requires separate pricing from their sales representatives, and you’ll need to investigate and decide whether it’s worth it for you. Another service I’ve seen being used, DataDome, also provides advanced protection and detection of non-human traffic and in some cases works really well. You need to understand that nothing can guarantee 100% protection, but with the way such crawlers can pretend to be human and really mask their appearance using IP spoofing techniques, it can be really hard to detect them at an automated level. So those paid services might be worth considering for someone who has a very significant bot problem and for whom the more basic approaches didn’t provide good results.
This screenshot provides an overview of the out-of-the-box features available in the Fastly module for rate limiting and bot mitigation. I’ll also show where this can be found in the live demo in a few minutes.
This is pretty much what relates to abusive crawler protection and path protection.
For abusive crawlers, we have some basic configuration where you can limit their hit rate on your store and exempt known good bots from those rules. You can also implement path protection for rate limiting, where you configure the specific paths you want to be under rate limit control. But this native implementation will not let you set up global rate limiting.
For more specific and flexible rate limit configuration, you can use a custom Fastly VCL snippet. This is a slightly modified version of the examples that can be found in the Fastly documentation, and the native Fastly module integration with 51ºÚÁϲ»´òìÈ Commerce allows those custom snippets to be uploaded from the 51ºÚÁϲ»´òìÈ Commerce admin panel.
In this particular case, you can see that you can configure a bypass whitelist, the rate limit check configuration, and exceptions for specific paths — in this case, media and static files are exempt from the validation. If the logic in the VCL detects a specific IP address going above the limits, it blocks it for a designated period of time.
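A very rough sketch of the shape such a snippet can take is below. It assumes Fastly’s edge rate limiting primitives (ratecounter, penaltybox, ratelimit.check_rate); the thresholds, names, and exempt paths are made up, and the exact argument order and availability of these primitives should be confirmed against the current Fastly documentation before use:

```vcl
# Rough sketch only - thresholds, names, and exempt paths are illustrative.
ratecounter client_req_rate {}
penaltybox  rate_offenders {}

sub vcl_recv {
  # Exempt static and media assets so normal page loads are not penalized
  if (req.url.path !~ "^/(media|static)/") {
    # Roughly: more than 100 requests per 10-second window from one client IP
    # puts that IP in the penalty box for 5 minutes.
    if (ratelimit.check_rate(req.http.Fastly-Client-IP, client_req_rate,
                             1, 10, 100, rate_offenders, 5m)) {
      error 429 "Too Many Requests";
    }
  }
}
```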
And now we need to go to the second demonstration.
What I wanted to show here is the Observation panel — first of all, where it can be found. In your menu, you can open the Apps tab.
The Observation app will be in this list; you can find it by filtering for “Observation for 51ºÚÁϲ»´òìÈ Commerce”. It’s available there if you are an 51ºÚÁϲ»´òìÈ Commerce on cloud customer. By going to that tab, you will be able to see all these reports aggregated in a single dashboard.
I’m going to set a filter to the production environment. There isn’t much data because it’s just a sandbox and doesn’t have a lot of live data, but you can see what it can aggregate in general: infrastructure statistics, different types of CPU usage, error rate, swap memory, and so on. It also aggregates a lot of statistics on different metrics for your responses, requests, and the IP addresses hitting your site, which is what we’re mostly interested in within the scope of this conversation. On a live site it might look something like this, where you will see all these statistics aggregated for you, and you can choose the time period over which you want the aggregation to happen.
It will also show suspicious, most likely bot, activity there, and you will be able to see all the IPs involved and investigate them separately.
And for the out-of-the-box Fastly full page cache functionality for bot mitigation purposes, you can always go to the store configuration section, then the Advanced > System section, Full Page Cache, Fastly configuration. That’s where the basic Fastly integration is set up.
And here, you can find the rate limiting section.
When it’s enabled, you can explicitly set up path protection for specific paths on your store, for specific controllers, and so on.
Or you can configure abusive crawler mitigation and set the parameters you want it to have, out of the box.
So that’s pretty much the demo I have, and the main material for this session. I guess we’re ready to go to the Q&A section.
Awesome. Thank you, Andrii, for such a deep dive there. With that, everyone, let’s jump over to Q&A, where we’ll have a couple of minutes to go through some of your questions.
So I’m going to start us off with the first one for you, Andrii: do you have any advice for managing the upcoming peak sale period? For the peak sale period, we have some general advice: do full load testing in advance on your store to know what sort of load your servers can handle. If you can do that load testing on a staging environment with a proper mirror of your data, that’s great; if not, maybe set aside some time during a maintenance window to run such tests. And of course, do the basic optimizations and bot mitigation, because this problem can be hard to handle and can add additional cost to the store throughout the year, but during the peak sales season it’s twice as critical — you don’t want any additional load from bot traffic on your store when it’s already working at maximum during the high sales season. Just make sure you follow all these best practices and do the optimizations in advance. Don’t do them too close to the big sales season, because you still want a code freeze period to make sure no changes are made at the last minute.
From that perspective, just make sure you have enough hardware to handle the load you expect on your side. If it’s not enough, maybe explicitly schedule an upsize of your hardware environment, so you can guarantee the expected load can be handled, based on the load testing you did and the extrapolation of its results to your expected load numbers and concurrent users. Awesome, thank you. The next question is: what is the best way to block bots, handle disk space, and control UI events to prevent race conditions? Well, a lot of that we’ve already discussed during this session.
To prevent race conditions, it really depends on what particular approach you need to use for rate limiting; it will mainly depend on the type of customizations the store has and how it works. You can always start with the out-of-the-box approaches for rate limiting specific paths.
Those are quite safe because they have a limited scope of application — they don’t apply globally. But if you need wider protection and want to apply rate limiting globally, you can do that with those custom VCL snippets. That’s something that can be done, but you need to test it first in your staging environment, because you have a higher risk of blocking legitimate users if there are a lot of requests coming in for static resources on your instance. You don’t want that. So you need to find the golden mean between the conservative approach and the advanced blocking approach, where you block as many bots as possible without impacting your customers — especially if your frontend is designed in a way that makes a lot of requests for resources that are also subject to the rate limiting rules.
Okay, perfect. Thank you, Andrii, really appreciate you going through those. I want to wrap us up for today; I know we have a few minutes left. On this screen, we have a bunch of resources for you in the web link section. This includes a link to register for a live demo on how to scale your business with 51ºÚÁϲ»´òìÈ Commerce, which will be taking place on October 24th. We’ve also included recordings from our previous Commerce and Coffee webinars, as well as our events catalog, where you’ll find past and upcoming events for all of our webinar series. You’ll also see the white paper to the right. Please don’t forget to download that on your way out, as well as answer our survey questions below if you have a minute. If you have a question specific to your account that we didn’t address today, please reach out to your solution account manager. If you’re not sure who that might be, you can reach out to me directly and I’ll put you in touch with the correct person. As a reminder, you will receive a recording of today’s event in an email from us within 24 hours. So that’s all from us today. Thank you so much for attending. We really appreciate it. Have a great rest of your day, and we look forward to seeing you at one of our upcoming events.
Presenters
- Jeff McGuire, a digital engagement strategist at 51ºÚÁϲ»´òìÈ
- Andrii Abumuslimov, a customer technical advisor at 51ºÚÁϲ»´òìÈ, specializing in 51ºÚÁϲ»´òìÈ Commerce (formerly Magento)
Key takeaways
- The webinar focuses on optimizing e-commerce storefronts.
- It is interactive, with a Q&A session at the end.
- The session is recorded and will be available on demand.
Topics Covered
- Storage usage monitoring and optimization.
- Web crawlers, traffic detection, and mitigation.
Storage Optimization
- Use New Relic for monitoring storage usage.
- Implement best practices for data management and optimization.
- Enable deep image optimization to save disk space.
Web Crawlers
- Detect and mitigate abusive web crawlers using tools like New Relic and custom SQL queries.
- Implement rate limiting and blocking strategies for non-legit bots.
- Consider advanced services like Human Bot Defender for sophisticated bot detection.
Preparation for Peak Sales
- Conduct load testing in advance.
- Optimize and mitigate bot traffic to ensure smooth operation during high sales periods.
- Schedule maintenance and hardware upgrades if necessary.
Resources and Follow-up
- Access to a white paper and additional resources.
- Recording of the webinar will be sent via email.