Webinar Date: July 31st, 2024
This webinar has been completed - please view the recording below.
Integrating Private GitHub Repositories in AEM Cloud Manager
During this AEM GEMs webinar, we will show how you can get the most out of adding your own private GitHub repository in Cloud Manager. We will start with repository onboarding, which now allows you to directly link a GitHub repository to your Cloud Manager pipelines, eliminating the need to constantly sync your code with the 51ºÚÁϲ»´òìÈ repository. Then we will showcase how this feature allows you to shift left your testing process, bringing the Cloud Manager code quality checks to the pull request level, before the code is merged. This way, as an AEM developer, you are able to identify issues that would have failed your Cloud Manager build sooner in your development cycle.
Presenters
- Dragos Calin, Software Development Engineer, 51ºÚÁϲ»´òìÈ
Chat Experts
- Dan Balescu, Senior Engineering Manager, 51ºÚÁϲ»´òìÈ
- Shankari Panchapakesan, Principal Product Manager, 51ºÚÁϲ»´òìÈ
- Florin Stancu, Software Development Engineer, 51ºÚÁϲ»´òìÈ
Webinar Recording
OK, we are all set. Handing over to you, Dragos. Thank you. Thanks a lot, Goran. Hello, everyone, and welcome to today's session, and thanks for taking the time to join. My name is Dragos. I'm a software development engineer in AEM Cloud Manager, and today I will be talking about one of the features that my team and I worked on in the past months. The feature is called Bring Your Own GitHub, and as the name says, it's about integrating your own private GitHub repositories in Cloud Manager, and all the additional functionality we are able to leverage once this integration is enabled. OK, the agenda for today. I will start by talking a bit about the previous situation with repositories in Cloud Manager, hoping to make clearer the value added by our feature. After that, I will present the main extra functionality we were able to create, which helps you shift left in your development and testing processes by bringing Cloud Manager code quality checks to the pull request level, before merging your code. Then I will give you a hands-on demo in Cloud Manager to show you how to get the most out of this feature. I also want to talk about the timeline and lifecycle of how we implemented this, because it was a great experience: the feature was focused on customers and developed with the help of customer feedback. That was very helpful for us and enabled us to cover a lot of use cases, and it is also important for the next steps, because we want to take the same approach for the next steps of this feature and for other features as well. As a small spoiler, the next steps are mainly about integrating with other Git vendors, such as Bitbucket, GitLab, or self-hosted solutions.
So even if you are not using GitHub repositories, stay tuned, because we will hopefully reach your use case soon. And at the end, we will finish with the Q&A session. OK, so let's get started. We're talking about repositories, and more specifically, repositories in Cloud Manager. They are very closely linked to the Cloud Manager pipelines, meaning that you have a pipeline that builds the code and deploys it to some specific environments, and you use the Cloud Manager repository to tell the pipeline: take the code from this repo and this branch. Up until this feature was created, the only repositories available in Cloud Manager were the 51ºÚÁϲ»´òìÈ repositories, meaning you would go into Cloud Manager, give us a repo name, and we would create, maintain, and manage the repository for you. The URL would look something like this. On paper, this sounds very good, because you have everything in one place: the code, the pipelines, and the environments. But it comes with the downside that you can only interact with the 51ºÚÁϲ»´òìÈ repository through Git commands. So you can clone it, put it in your IDE, maybe get a slightly better view of the branches there, and then create branches and push them. But you would not have all the functionality available in the UI of specific Git vendors. Let's say you are using GitHub: our repositories don't have the option to have a thread on a pull request where developers can talk, discuss, and approve, or to configure checks on pull requests, webhooks, security, and different access levels. So if you still wanted all this, what most customers do, and what we also recommend in the documentation, is to have an external repository outside of 51ºÚÁϲ»´òìÈ. And because you still need to get the code from that external repo into our 51ºÚÁϲ»´òìÈ repo, you have to synchronize it.
This can mean anything from some manual Git commands that you run in your terminal, to a Jenkins job, GitHub Actions, or whatever option works for you, implemented and administrated by you, outside of Cloud Manager. The main downside is that it's an extra step in your deployment process to get the code from your place to the Cloud Manager environment, and it doesn't really bring any value. It's just moving the code from one place to another, and it brings another issue: you have two different repositories that need to be managed and kept in sync. If you do a rollback on one, you may need to track what's happening on the other, and this can get messy if you have multiple branches and multiple developers working on the repo. So our solution was to allow you to bring your own GitHub repo, meaning you go into Cloud Manager, and rather than giving the name of the repository, you provide the URL of your external repo, and Cloud Manager will know about it. I'll talk about how the onboarding is done technically a bit later in the demo. In this way, we removed the need for synchronization. A note, as I said: for now, we only support private and public repositories that are hosted on GitHub. So the URL of your repo will have to look something like this. If you have a self-hosted GitHub repository, GitHub Enterprise, some custom way of hosting your repo, or you're using other vendors like Bitbucket or GitLab, for now it will not work. But in the second part of the presentation, I'll talk about those, too.
So up to this point, Cloud Manager now knows about your GitHub repository. This enables us to do some cool stuff, because it's a two-way communication, meaning that Cloud Manager can also send data to your GitHub. The way we leverage this is that whenever a developer creates a pull request in your repo, a Cloud Manager code quality pipeline is triggered and executed, and the results are sent back to the pull request in GitHub. This means developers can get feedback on their code changes before they merge the code, while development is still in progress, and they get it exactly in the context of a Cloud Manager pipeline. So if you have a breaking change or a bug that would fail the Cloud Manager build, you are now able to identify it sooner in your development cycle. To present this a bit more clearly, I have this diagram showing the before state. Let's say you are a dev. You write the code, then open a pull request and ask for code review once the code is ready. Then the code gets merged into a branch, you need to sync the branch with the 51ºÚÁϲ»´òìÈ repo and start the pipeline in Cloud Manager, and only after the build is done in the pipeline are you able to get the results of the build and the code scanning part, where the Sonar and Oak checks are done.
So if you introduce a bug or an issue, let's say you break a code quality rule that breaks the Cloud Manager build, and you wrote that code in the first step, you will only find out all the way over here, after a lot of steps. And this gets even more tricky if you have multiple developers doing the same thing, and maybe you don't sync after every single merge of a pull request. So when you see the error in Cloud Manager, you may not know which PR caused it. With our solution, this big box moves all the way over here to the left. The developer writes the code, opens the PR, and then gets the result from the functionality implemented with the Bring Your Own GitHub feature. If something fails, they can just go back to writing code; they don't have to do the entire cycle again. And after this, they can continue with the rest of the flow.
And Dragos, I think we lost your audio.
We can’t hear you anymore.
How about now? Yes, perfect. OK, great. Thank you. Oh, I think I also shut off my camera by mistake. Sorry. No problem. Where did it cut? Because I'm not sure. It was just a couple of seconds, maybe 15 seconds. So it was right around here. I was just talking about how, if you have the bug here in the write-code part, you would be able to see it immediately after you open the pull request. It's not something that needs to be merged and synchronized, with the results coming all the way later; you get the feedback almost instantaneously. And this scales very well for multiple developers too, because now every single one of them gets feedback on their code as soon as they open the pull request. And that's the main value added by this functionality. So I guess that's enough talking for now. Let's go into the Cloud Manager UI. OK, so I started the demo. Let's say I'm an AEM developer, and this is my repository in GitHub where I have my code. In my case, it's just a simple WKND project generated from the template, so it's the most basic thing you can do with AEM. Now I want this repository to be in Cloud Manager. To do this, I go to the Repositories page in Cloud Manager and click on Add Repository. Before this feature, you only had the option to add an 51ºÚÁϲ»´òìÈ repo. Now you can also create a private one, where you provide the name, which I'll make the same as in GitHub, and the URL, which you take from here. I don't want to put a description, so now I save this. And you can see I also get a message. Because it's an external repository hosted on GitHub, we want to make sure that the person adding the URL in Cloud Manager also has write access in GitHub. There are two steps to validate this. The first one is to install the GitHub App in your organization. This needs to be done only once, the first time you onboard a GitHub repository, and this is the way Cloud Manager communicates with your repository.
So you install the app, which looks something like this. I don't have to install it because I already did so for other repositories in this program. It's just a GitHub App that can be found in the marketplace. It needs read permission for the code and metadata, to get the code and be able to run the checks on it, and read and write access for checks and pull requests, so it can provide the feedback I was talking about at the pull request level in GitHub. You can also set more granular access levels on it: if you have multiple repositories that are not specifically for Cloud Manager use, you can select only the relevant ones. So that's the app; you install it once, and after you do this, we are able to access your repo. Now we want to make an extra security check. We generate a secret for you, and we ask you to put that secret in your repository in GitHub, to make sure you have access in both products. So I generate the secret, and it tells me to put it in this file in my repo. I'll copy this. Because it's from the template, I already have the file path; otherwise I would have to create it. I'll update it directly in the GitHub UI. The secret is now in GitHub, and if I validate, you should see this going from this kind of warning sign to a valid state. Everything is good now. So now Cloud Manager knows about my GitHub repo. If I go to Pipelines and add a new pipeline, let's say a pipeline that deploys to dev from GitHub, I can continue and select the dev environment. As you can see, I have the GitHub repo here. It only has one branch, but if it had multiple branches, you'd be able to select them. Now I save this, and when I start the pipeline, Cloud Manager takes the code directly from GitHub, with no need to synchronize anything on the 51ºÚÁϲ»´òìÈ repository side. That's the onboarding and integration part of GitHub in Cloud Manager.
Now, for the shift-left part, I will create a pull request as a developer, and I want to make a very clear mistake that breaks the Cloud Manager checks. So again, just in the UI, I will hard-code a secret here.
So this should trigger a critical rule to fail in Cloud Manager and break the code scanning step.
So I commit the changes. I want to have them in a new branch, so I create a pull request, which I will call Breaking Change.
And I create the PR. Now the PR is created. Normally, of course, this is something that you would do in an IDE and then push, and so on, but for the sake of simplicity, I'm just showing it here. You can see that a bunch of checks have been triggered. Some were already configured at the repository level, but the one important for us is this one. Behind the scenes, this check starts a code quality pipeline in Cloud Manager. You can see the pipeline here, if you want to go to Cloud Manager to actually see it. It's a code quality full-stack pipeline that runs on your repository with the code from your PR, and it does the build, the unit testing, the code scanning, and the build image. Once the pipeline is finished, the results go back to GitHub. I won't wait for this one to finish, because it would just be a wait, so I'll show you another repository that I onboarded in Cloud Manager, with this PR, which does the same thing: it adds this secret. I'll talk about these annotations a bit later; I want to go back to the check part first. You can see the check failed. As a comment, we have this table telling us what metrics failed from the Cloud Manager perspective. And if you want to see more details, again, we go back to Cloud Manager.
And we see this execution with the second repo failing at code scanning. If you review the summary, you see it is the same information that was in the table in the PR in the GitHub repository. But the idea is that this happens behind the scenes, and you don't have to go to Cloud Manager; Cloud Manager brings the results back to you. Some more details to show here: as I was saying, not only do we have a check and this table, but we are also mapping the results from the CSV file, the issues found at the code scanning step, as comments at the file level. So here, the critical vulnerability found is that the password was a hard-coded credential. It's breaking a Cloud Manager Sonar rule, meaning the check fails.
We also show the issues on files that weren't changed by the PR. So if you have technical debt, you can see it too, and maybe solve it if it's in the scope of your feature. All the issues are available, again, directly from Cloud Manager, mapped in GitHub.
That's about how the feature works. As you can see, as a developer making a change that would break the Cloud Manager build, I know about it as soon as the check finishes. I don't merge the code, run a deployment, see a pipeline failing at build time, realize it was my mistake, and go back to fix it; I see the mistake before it's merged. Now, we want to make this PR check as configurable as possible, and we do this, let me go back to the main repo file system, with this YAML configuration file. I'll go through every detail. You can configure whether the check should delete previous comments, because of the way the check works: if you update the pull request, it adds another comment, since it sees the update as a new change. So you can either keep all those tables to have a history of your code scanning results, or choose to keep only the most recent result. Then, the important part here is the template one, because I know some customers have pipeline variables, which are like secrets needed for the build, and the pipeline that is auto-generated when a PR is created doesn't know about these variables. So you point it to the program and pipeline ID, and Cloud Manager will clone those pipeline variables when it creates the auto-generated pipeline. You can also change the name of the pipeline that is auto-generated at PR creation, and configure how it behaves when an important metric is failing.
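As a sketch only, the options just described could be laid out in a YAML file along these lines. Every key name, value, and the structure below are illustrative assumptions, not the documented Cloud Manager schema; consult the Cloud Manager documentation for the real file path and field names.

```yaml
# HYPOTHETICAL sketch of the PR-check configuration discussed above.
# Key names are assumptions for illustration only.
pullRequestCheck:
  # Keep only the latest results table, or retain a history of comments.
  deletePreviousComments: true
  # Template pipeline whose variables (build secrets, etc.) should be
  # cloned into the auto-generated PR pipeline.
  template:
    programId: "11111"     # placeholder program ID
    pipelineId: "22222"    # placeholder pipeline ID
  # Name given to the auto-generated pipeline at PR creation.
  pipelineName: "pr-code-quality"
  # Whether a failing "important" metric should fail the whole check.
  failOnImportantMetrics: false
```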
And that's, for now, the configuration that we are able to do. One more thing I want to mention; let me go to the Pipelines page again. Obviously, for every pull request created, a pipeline is created in Cloud Manager, and we know that if you have multiple pull requests, this can get crowded very fast. So once you either merge or close a pull request, the pipeline gets deleted, cleaning up resources in Cloud Manager. So let's say I want to close this one, which is on the AEM base repo. It's a breaking change, so I'm closing it.
It should also delete this pipeline. Yeah, we can see that it got deleted.
Let me go to Pipelines again. So the pipeline with the code quality check is no longer here. And that's it, that's the bit for the demo. I want to go back to the presentation and talk about the timeline of the feature, and you will see in a bit why. We started in July with a survey asking customers what Git vendor they're using, GitHub, Bitbucket, GitLab, or something else, and whether they would be interested in trying out this feature, whether they'd find useful a feature that does code quality checks at the pull request level. We waited for a month and got some replies, and we contacted the first group of customers that were interested and were using GitHub. Then, up until November, we had a bunch of meetings, mails, and message exchanges with them, talking about general things like their vision, how they're using AEM and Cloud Manager, our vision, what they think can be improved, their most common flows and use cases, and so on. Then we also talked specifics about this feature: we showed them mock-ups, we showed them how we envisioned the flow, we asked for opinions, we asked if they'd be interested in this or not. This went on until November, when we were finally able to release the feature as part of the early adoption program. It had limited functionality; it was mostly focused on the pull request check part. You still weren't able to link the repository to any pipeline that you created; it only worked with auto-generated pipelines. We just wanted to get it out as soon as possible, to get that first group of customers to try it, and to allow everyone else to try it if they wanted to be part of the early adoption program. And this opened up a very productive period for us and, I would say, for the customers engaged, too, because up until June we had iterative work with code deployments and reaching milestones.
Around spring, we were finally able to let you actually link the repo to any pipeline. We worked this way, with continuous feedback from the customers, and that really helped us understand the use cases, think of scenarios outside of the happy path that we envisioned, and cover those as well. For instance, we had a customer using submodules, and the feature was not working for submodules, so we had to address and implement that. Everything you saw in the template I was showing you, the pipeline variables and the possibility to configure how the pipeline behaves when an important metric is failing, all of this came from customer feedback. For some customers, the check would fail every single time, because either they had pipeline variables or they had some technical debt with important metrics failing every time, so the check was pretty useless for them. We wanted to bring value for them too, which is why we created the template, which wasn't in our initial plan at all. We also got feedback on how the results should look, the comments, the deletion or not of the comments, and so on. We worked in fail-fast cycles. We went to the customers saying: OK, you can try this; maybe it fails; let us know if the new feature is blocking you in any way; we will offer workarounds if there are bugs. This really enabled us to work very quickly and cover a lot of requirements coming directly from the customers. And in June, we were able to release the feature to everyone. So now, if you go into Cloud Manager, you should see that option when you add a repository. And because it's still July, for a couple more hours, we have reached the present time in our timeline, where we are having this session. So I think we can go to the next steps.
For now, we have some limitations. The GitHub repository still cannot be linked to config and web tier pipelines, and we obviously have to address this so it works the same way as an 51ºÚÁϲ»´òìÈ repository. We also have to address the option to reuse build artifacts, meaning that if you do two consecutive builds with the same commit, the second one should just reuse what was built in the first one. Also, the check annotations, the part where we comment on every line that has an issue, are not yet GA, and we are working to enable this for everyone as well. And now the more interesting part: we are in the exploration phase of offering support for other Git vendors. You saw the integration with GitHub; we want to do something similar for other vendors: GitLab, Bitbucket, Azure-hosted ones, AWS CodeCommit, self-hosted GitHub, and other self-hosted options. We are looking at the usage and the impact, and we are looking for solutions. The main one we are considering is a more generic option, where we also ask you to add an access token and configure a webhook when you onboard the repository in Cloud Manager, so we can approach multiple different vendors more easily. But we are also exploring alternatives to the GitHub App, like Atlassian's Connect and Forge apps, which allow roughly the same type of integration that we are using for now with Bring Your Own GitHub. And the reason I'm telling you all this, and why I wanted to go over the current timeline, is that the help from the customers was so valuable, and we want to have the same for the next phases as well. So if you are using any other vendor and want to let us know your use case, what other integrations you have and your feedback on them, or if you just want to be the first to try it out in the next early adoption phase, we have an email. I'll also add it to the resource page when you get the slides. You have the email here.
And yeah, I think that's about it for this feature, what it can do, and what the experience has been so far with it. Thank you, and I think we can go to the Q&A session. Thanks a lot, Dragos, for your presentation and demo. That's great. And thanks to the team for answering so many questions in the Q&A tab. I am going through the ones we haven't answered yet. The first one is: do we have an option to disable the code quality build pipeline on every pull request? So I'm assuming they would not want the check on the pull request at all.
If that’s the case, I’m not really sure if it can be configured at GitHub level, but it’s something that we might take into account.
But I'm not 100% sure, so this is why I don't want to say. From our implementation, you cannot just disable it in the template that I showed you. But I'm open to a follow-up, because I would like to understand the use case and why it's not helpful for them. Yes, please use the contextual thread, for which we have posted the link in the general chat for community interactions. Post-session, you can still post your question there, and we'll follow up. Next question: if we want to exclude a Java class from JaCoCo code coverage, how do we achieve this in 51ºÚÁϲ»´òìÈ Git? Well, I don't think we want this, because what we want is to have exactly the same build and context as a normal Cloud Manager pipeline and build. We don't want to allow you to skip some tests on GitHub and then have them fail in Cloud Manager. So I'm not sure that's something we want to support, and if it cannot be done from your side in the code, I don't know if we can support it in our checks. OK, thank you.
Next question, let me check.
Is it supported by Amazon ECR as well? No, we had this one already. It is not yet supported, right? Yeah.
It’s just GitHub for now. But again, let us know what you’re using, what other integrations you have, and we will consider it.
What is the timeline for enabling GitLab Enterprise repos? I don't have a specific timeline to give. As you saw, it's kind of a two-phase feature: for one part, we allow you to bring the repository into Cloud Manager and link it to the pipelines; the other part is the pull request checks (for GitLab, merge request checks).
What I can say is that the first part, if we go for the generic approach where we ask you for a token when you onboard the repo in Cloud Manager, is something we are planning to have by the end of the year. For the other part, the checks on pull requests, it will depend on how we prioritize and which vendors we start with, because this is a vendor-specific implementation, and we have to decide which one has priority: GitLab or Bitbucket, Enterprise or not. Yeah. Thank you, Dragos. The next one is a comment from Chris. Hi, Chris. He would like to be part of the evaluation for Cloud Manager and Bitbucket self-hosted testing. We can follow up on that. Yeah, definitely. Next question: can we restrict access to the GitHub code repo added to Cloud Manager if a user has Cloud Manager access? They don't want to allow downloading or cloning of the code. So, the way it works is that we access the repository entirely through the GitHub App, so there is no authentication at the user level.
And we don't persist the code. We just clone it for the build, which happens in a temporary container that gets deleted.
So the idea is that we will have to have something with read access, at least to the repository.
But since it's not at the user level, it's not something that I would see as a risk of a bad actor being able to clone the code. But if I didn't answer correctly, I'm happy to discuss the specific situation in more detail. Yeah, just post a follow-up question. So the next question is: can you describe any use cases for AEM Forms? How do we catch errors in Forms before running the pipeline? It's just the checks on the AEM code, the checks that are done with any build in the pipeline. So if something is not covered by a normal code quality pipeline, it won't be covered by this check either.
Thank you. And one question I want to emphasize, which has already been answered: what happens if GitHub has an outage? Who is responsible for any customer's operational, business, or financial impacts of a delayed deployment? Yeah.
Well, I don't see a difference from the current setup where you do the sync, because normally, if GitHub has an outage, you can't sync the new changes to the 51ºÚÁϲ»´òìÈ repo either. So I don't think there's a difference. I'm sorry, I think you're not seeing the answer. One answer from Dan was that the GitHub relation is owned by the customer, so they know best what to expect. OK.
We think we have answered all questions. So please post more questions if you have any.
And in the meantime, I would like to ask you to give us feedback on this session and to complete our ending poll, where you can rate our session and also request or propose session topics for future AEM GEMs webinars. We have put the link in the general chat.
OK.
Where does the build happen before deployment, GitHub or Cloud Manager? The build happens in Cloud Manager. GitHub is just the trigger, because we receive an event when a pull request is created or updated. Once we receive this event, we start the build in a pipeline in Cloud Manager, and we just send the results to GitHub. But everything happens in Cloud Manager in terms of the build.
Thank you.
Where does production pipeline approval take place? Again, in Cloud Manager. I'm assuming it's about the case where you swap a pipeline's code source from an 51ºÚÁϲ»´òìÈ repo to a GitHub one. Cloud Manager is just taking the code from a different place; the rest happens in Cloud Manager as before. So no changes on this side; the pipeline just has a different code source. OK, thanks. Is this setup only for code quality, or for deployment too? The check is just for code quality, because we want to do the checks on the build. But you can run your deployment pipeline with a GitHub repo and just deploy it. Or maybe the question is about deploying a specific branch: once you create a PR, you already have the specific branch, and you can go to Cloud Manager and start a deployment pipeline. It wouldn't scale to deploy for every PR, because you maybe have just one environment, or even if you have multiple, the count definitely won't match the number of pull requests.
Thank you. The one question is if the recording will be shared. Yes, the recording will be shared first through the link of the contextual threads you will find in the general chat.
And the next question (there are several questions about this; they might have come in late) is: is this available for a private repository, which is only available on the company intranet and not accessible over the internet? No. Let me just go back to this slide, because the term private is a bit confusing.
When we say private repo, we mean private repo in the context of GitHub. If it's in your own hosted infrastructure, then no. So your GitHub URL has to look something like this; I think that's the simplest explanation. Like the first one; the other ones don't work yet. Thank you. Next question: does this approach support Git submodules and multiple repositories? Yeah, as I said, it was exactly feedback from a customer that brought this to our attention. Initially it wasn't working, but now it works. It's in one of the documentation links that I shared at the end of the presentation: how to set up submodules using GitHub repos. And just for your awareness, we'll also share the slides along with the recording in the next two days.
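As a quick illustration of the Git side of the submodule setup mentioned above, the sketch below uses local repositories as stand-ins for the real GitHub remotes; the Cloud Manager-specific configuration is covered in the documentation links from the presentation. The repo and path names are placeholders.

```shell
# Illustration of a Git submodule setup; local repos stand in for the
# real GitHub remotes, and all names are placeholders.
set -e
work=$(mktemp -d); cd "$work"

# A shared library repo that the main project will pull in as a submodule.
git init -q --bare shared-lib.git
git clone -q shared-lib.git lib-seed
git -C lib-seed -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "shared library"
git -C lib-seed push -q origin HEAD

# The main AEM project, referencing the library as a submodule.
git init -q main-project
cd main-project
# protocol.file.allow is only needed because the "remote" is a local path.
git -c protocol.file.allow=always submodule add -q "$work/shared-lib.git" libs/shared
git -c user.email=dev@example.com -c user.name=dev commit -q -m "add submodule"
```

With a real GitHub setup, the GitHub App would need access to the submodule repositories as well, as discussed later in the Q&A.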
We’re waiting for more questions to appear.
OK, it doesn't seem to be the case. We'll wait one more short moment before we close the session. And a final reminder to please complete our ending poll, which includes a rating and feedback, and where you can also propose topics for future sessions. Next question: are Git change triggers supported? I think it's either in the last release or in the upcoming release; it's coming. I think it's about on-commit triggers on pipelines.
Yeah, I think so, either way. I don't know exactly; you can check the release notes for the past month. If it's not there, it's going to be in this month's release.
Thank you.
Is there a future plan to deprecate the 51ºÚÁϲ»´òìÈ-hosted repo in favor of the private customer GitHub repo? The answer is no; the team already answered this in the chat. Thanks. Obviously, we can't just deprecate it. That would only make sense if no one used it at all, and I think we have far too big a customer base for that to happen.
Will this work for multi-tenant repositories? I'm not sure what is meant here. Having the same repo for multiple tenants in Cloud Manager? I'm not sure if that's the question. Please elaborate, Praveen.
If that's the case, I'm not 100% sure, but I think it works the same way as for 51ºÚÁϲ»´òìÈ Repos. I'll check and follow up later, definitely.
OK. We're not getting any more questions, so that's great. One more popped up: does this support the build of a repository which is dependent on multiple other AEM repositories, and how does that work? Can you repeat the first part, please? Does this support the build of a repository which is dependent on multiple other AEM repositories? How does that work? OK.
Is this different from submodules? If not, I would say yes, because the GitHub app would need access to all those dependent repositories as well. Otherwise, you would have to keep that code in an artifact repository and provide the secret for it. Florin replied yes, using Git submodules. OK. Next question: I know you mentioned that web tier and config pipelines are not supported. What about frontend pipelines? Frontend should work. It's not covered by the pull request checks yet, but you should be able to link a GitHub repo to any of your frontend pipelines. Thank you. And if you can't, definitely let us know.
A clarification came in: multi-tenant in the same repo means dispatcher rules per virtual host, with the code running on the same cloud AEM instance. So this would work the same. So the question was, I think, whether multi-tenant in the same repo means dispatcher rules per virtual host?
Yeah.
I'm not really sure. Yeah, we should follow up. If you explain the use case you currently have, we might be able to answer it. I don't have an answer for this now, so we would need to follow up, and you would need to elaborate, Benny. Please contact us via the AEM user group.
OK.
We’re going to wait a couple more seconds before we close the session. Again, please complete the ending poll just so we get your feedback.
Another question popped up: is there a way to build and deploy only one module in a module project which has the code push? I don't quite see how that maps to what you currently have, or what would be different, and why. I'll repeat it because it was corrected: is there a way to build and deploy only one module in a multi-module project when code is pushed? Oh, OK, I see. So you have the big repo, then a smaller one that's the submodule, and you push only that. Florin already replied: no, we build everything that is in the Git repository, even if the pull request has changes in only one module.
OK. Thanks a lot to the chat experts; you're doing a great job answering all these questions. Are there any other questions? Otherwise, if you have questions after this session, please go to our contextual thread, which we use for community interaction. There you can still post questions or get in contact with us via the am-user-group-u-oris-vp2. With that, I would like to thank you for your attention, and thanks for joining. Thanks to Dragos for presenting and answering questions, as well as to our team of chat experts. And to everyone, have a great day or evening. Thank you, and bye-bye.
Webinar Community Interaction
- For webinar / topic specific community interactions on this webinar on Experience League, please visit the respective .
- To receive notifications on our upcoming webinars, please register at .