DevOps Meetup Zürich, July 2021



The GitOps pattern and philosophy has gained more and more popularity in recent years. We want to give you some insight into what it is capable of and how it changed our view on "software delivery": from installing and configuring software to reconciling an application.

This talk by Christian Jakob was held on July 21, 2021 as part of the DevOps Meetup Zürich, hosted by DevOpsDays Zürich. The video can also be found on Vimeo.


  • 0:00 Intro
  • 4:25 Introduction Christian Jakob
  • 6:28 A little bit of history
  • 7:55 What is GitOps
  • 10:09 A repository just for Ops?
  • 13:51 Reconciler
  • 15:10 Operators
  • 17:24 Main Benefits of GitOps
  • 22:49 Main Challenges of GitOps
  • 29:57 Git as single source of truth
  • 31:55 Demonstration Part 1
  • 36:22 Q&A Intermezzo
  • 54:12 Demonstration Part 2
  • 58:22 Q&A
  • 1:02:40 Outro

Have a look at the slides

Mentions of tools & companies

DevOpsDays Zürich

Devopsdays is a worldwide series of technical conferences covering topics of software development, IT infrastructure operations, and the intersection between them.


This is a lightly edited automated transcript of the talk.

Now I'm really happy to introduce Christian. Christian founded a startup in the IAM business, CAOS, which is really interesting. He's a DevOps enthusiast, and besides having his own startup he's also consulting in the DevOps area. What he talks about today is "What is GitOps and why should we use it?". Christian, we really look forward to your talk, and the stage is yours. Thanks a lot.

Thank you for the introduction, Martin. As you said, me and my colleagues, we're 11 people now. We founded CAOS to enter the market for the IAM business in Switzerland, and me personally, I'm looking after all the former ops business, now DevOps business. So we automate things, we are pretty strong in the Kubernetes business, we like to be lazy, we like to build microservices, and GitOps was our weapon of choice to make our software happen from an ops point of view. Let me try to share my screen and start the talk.

So first of all, as mentioned before: if you have any questions, I would really like you to interrupt me. I'm fine with that; if you have detailed questions or any hints, just let me know, since otherwise I'm just sitting in an empty room and talking to a monitor. Whatever, let's get started: what is GitOps and what can it do for you?

And how do I get the presentation to the next slide? A little bit of history, how we did things in the past. Basically, what dev and ops were in the past: developers compiled their software, had some kind of package, hopefully in some kind of package store like Artifactory, gave their package to an ops person, and he would install it into his system. After a while things got so complicated that there was a need for documentation, and a need for some kind of ticket system to track specific tasks, so dev people started documenting, and the ops person did the same, but with paper.

After the rise of pipelines, and despite the disadvantages of designated continuous integration and continuous delivery servers, we switched to a CI/CD ops kind of thing, where pipelines do things, have their documentation, and try to install a piece of software on the target system. Nowadays we have the new competitor, which is of course DevOps; you know what DevOps is, that's the reason we're here. It means we have cross-functional teams: the dev people and the ops people don't just look at their own business, their main goal is a functional system and a functional piece of software to satisfy the customers. And on top of that we have GitOps.

So what is it? After searching the internet, I read: GitOps is about reconcilers and operators to fully automate infrastructure as code; or probably a specific operating model just for Kubernetes; or probably a specific tool to provision software to a Kubernetes cluster; or probably a fully automated way to deploy containers; or just magic for container nerds. Most of the things I read were just container and/or Kubernetes focused. I had a different thought about it, and I just want to say that this reflects my personal point of view and the experience we had over the last two years, where we used GitOps as our main delivery method.

So in my opinion GitOps is the technical interpretation of DevOps. Let's look: we still have different mindsets between different, let's call them, departments. A developer cares about features, code, builds, hopefully tests as well, his artifacts, his story points and his sprint. The ops person on the other side cares about sleeping well at night, the service level agreements, change management, the environments, deployments, security, monitoring, yada yada. And if an application had a mindset, it would probably care about its customers, the subsystems and the data it needs to run, the logic it has to run, the performance it has, and of course the infrastructure it runs on. So basically the delivery method is always the same: dev hands something over to an ops person, the ops person enriches an environment through config management, secrets, whatever is needed, some kind of helper applications, to deliver the application. Hopefully there's some kind of feedback cycle and testing that gives devs test data and some hints out of the running application. But in general this is the way it always used to be, and probably in most companies the way it still is.

So the basic idea of GitOps is: create a repository just for your target environment. What environment do I mean? It could be a Kubernetes cluster, it could be just a box, it could be a virtual machine, whatever you want to call your environment, or any environment. And the main goal is that you declare how your application should look on the target system, just like a piece of sheet music declares how it should sound and how it should be played. You want to be as precise as possible and store every bit of information you need to reproduce your environment and/or application. Think of it: you have a completely empty virtual machine or a completely empty Kubernetes cluster, and on top of that you want to declare every little piece, to be able to describe your application as precisely as possible and reproduce it at any time, as every bit of information is declared already.

So basically, think in applications. An application has a core: it consists of, probably, a UI, a back end, a database. For the database you can take any piece of state: data, S3 storage, an NFS share, a file system, whatever. So you have logic, some kind of front end, some kind of back end, and this is just the basic description of your application. It doesn't know where it has to run; it's just: which components do I need to have a functional application? This could be the dev's view when he does his tests on his local box. In addition to that, you have different flavors of an application, because you want to have specific environments like development, test or production, and of course these environments will have different secrets, different configuration, different data, probably different versions (hopefully), and of course different demands, as production needs to be backed up while dev might not, and so on. In addition to that, depending on where you want to run your software, you might need helpers: certificates, reverse proxies, API gateways, monitoring facilities, you want to have a look at the logs, whatever. If you use a hyperscaler, most of them will bring their own suite, as AWS, Google and Azure do, but if you choose to use their tools only, you have vendor lock-in. So it's up to you whether you want to use their tools or your own tools; probably you have already established tools in your company, and therefore the need to use them as well is there.

So what do we have? We have an ops repository, we defined the application, probably multiple flavors of it, and if needed some helpers. We still need some kind of magic to get this to our environment, let's say infrastructure to be precise; it doesn't have to be Kubernetes, it could be anything. That's what we've got so far. With GitOps we want something different: we want the magic within the environment, having a repetitive look at the ops repository, having a look at the sheet (speaking in music again), at what has been described, and applying what has been described to the environment. That shifts the complete deployment point of view, as you're not delivering something but just telling your target system: have a look over there, there's the description, take whatever you think is right for you.

Of course you need some kind of logic, some kind of magic, because a simple description doesn't make things happen. There are multiple techniques within GitOps.

You might use a reconciler. Basically, a reconciler is a piece of software that looks for changes in a Git repository, let's say every minute or every two minutes, and applies these changes. Simple as that. As an example: just install a piece of software as declared; that's the way it works. There's your description.
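The reconciler idea can be sketched in a few lines. This is an illustrative toy, not how Argo CD or Flux are actually implemented: the desired state (as it would be parsed out of the ops repository) and the actual state of the target system are plain dicts, and one reconciliation tick converges the actual state toward the declared one.

```python
# Minimal reconciler sketch: converge actual state toward desired state.
# Real tools do this against a Git checkout and a Kubernetes API;
# here both are plain dicts mapping component name to version.

def reconcile(desired: dict, actual: dict) -> list:
    """One reconciliation tick: returns the actions it 'applied'."""
    actions = []
    for name, version in desired.items():
        if actual.get(name) != version:
            actions.append(f"apply {name}={version}")
            actual[name] = version          # install or update the component
    for name in list(actual):
        if name not in desired:
            actions.append(f"delete {name}")
            del actual[name]                # prune what is not declared
    return actions

# The repository declares nginx 1.21 and a new app version; the cluster
# still runs the old app version and an undeclared redis.
desired = {"nginx": "1.21", "myapp": "2.0"}
actual = {"myapp": "1.9", "redis": "6"}
print(reconcile(desired, actual))
print(actual == desired)
```

A real reconciler runs this loop forever; the pruning step is why nothing undeclared survives on the target system.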

Yep, was that kind of a question? I guess this was more an audio flicker than a real question; sorry, if so, just interrupt. Anyway: you need to introduce a reconciler to your target system as the first piece of software, before installing anything else, and tell it: you know what, there's your ops repository, there is a specific path within that repository, check it out and install what's there. Hopefully, if you configured it right, you should have an application. Besides that, you might have the need for an operator, because reconcilers are pretty good at reproducing the description, but they don't know anything about your application. They just know: there's a Git repository, there's a path they have to watch, and they apply things. So you need something, or someone, that knows a single application pretty well and reacts to events on that application. The best example is a backup of a database: say, a backup of your production database before you apply any new version of your specific application. The basic idea is the same: you check the repository, you apply a certain state to your application, but in addition you check events, you check your status, and you can do the full cycle. You can even go so far as to have the cluster inspected and a change that was made by a human reverse-checked into the ops repository. It depends on how far you want to go. We already tested that approach: for some applications it works pretty well, for others it is a pain in the ass and a nightmare. It depends on your needs. So which one is right?

The reconciler is pretty cool if you handle a lot of applications and you want to have them installed reliably. There are a lot on the market for Kubernetes; the old kids on the block are Argo CD and Flux, which merged a couple of years ago but still have two different code bases. And in my opinion, even if the logic is used differently, Ansible and Terraform have the same flavor; I would just not use them as a push platform, but I would use them to pull information out of a repository.
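For illustration: "tell the reconciler which repository and path to watch" is itself a declaration. In Argo CD this is an Application resource; a minimal sketch could look like the following, where the repository URL, path and namespaces are hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/ops-repo.git   # hypothetical ops repository
    targetRevision: main
    path: overlays/dev           # the path the reconciler watches
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-dev
  syncPolicy:
    automated:
      prune: true                # delete what is no longer declared
      selfHeal: true             # revert manual changes on the cluster
```

With `automated` sync, the reconciler applies every commit on that path without a human pulling a trigger.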

The operator on the other side is just good for a single piece of application. There are plenty out there for databases; one of the best is the Prometheus operator, which completely automates your monitoring approach. Or probably your own operator, if you have specific needs for your application.
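The backup-before-upgrade behavior described earlier can be sketched as toy reconciliation logic with application-specific knowledge baked in. All names here are illustrative; a real operator (built with something like Operator SDK or kubebuilder) watches custom resources and cluster events instead of dicts.

```python
# Operator sketch: like a reconciler, but it knows its one application.
# Before converging to a new database version it takes a backup first,
# a step a generic reconciler could never know it should perform.

def backup(name: str, version: str) -> str:
    """Stand-in for a real snapshot; returns a backup identifier."""
    return f"backup of {name} at {version}"

def operate(desired_version: str, status: dict) -> dict:
    """One reconciliation step for a single database instance."""
    if status["version"] != desired_version:
        # Application-specific safeguard: snapshot the old state first.
        status["last_backup"] = backup("mysql", status["version"])
        status["version"] = desired_version
    return status

status = {"version": "5.7", "last_backup": None}
status = operate("8.0", status)
print(status)
```

The point is the extra, application-aware step before converging; everything else is the same watch-and-apply cycle a reconciler does.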

So what is the main benefit, in a nutshell? Well, you stop pushing: no tweaking things, no stepping onto a box and pushing something. Instead you tell someone to pull something. But you need to be very precise in what you ask for, because you want to declare your application as precisely as possible; the pulling side is just a dumb machine, and you have to be very careful about that.

So GitOps in a nutshell: it is a point of collaboration, because as you declare everything within a single ops repository, your ops guy will take care of the status of the application within the repository, and your dev guy (which, sorry, might be the same person as well) might take care of the image versions of your application. Both collaborate through Git commits in a Git repository. Each commit, each status, is then interpreted by a reconciler or operator, depending on your needs, and then reflected in an application on the target system. What does that mean? Well, you can handle changes to your infrastructure just as you know it from your development process: you can do pull requests, you can do rollbacks (besides stateful data, which is always painful), you can historically track all the changes you made to a certain environment, and you know exactly which software is at which environment, as there is no human or undocumented magic that needs to be added to make the software happen. Of course it is as transparent as possible, as every piece of information needed is in a certain Git repository. And it scales if needed, because who says you can only point a single cluster at a single repository? We did live migrations from one cluster at one provider to another, against the same repository, and it worked out pretty fine. Christian, we have a question in the chat: Flavio is asking how these patterns would apply to a serverless architecture.

That is a pretty good question; I haven't thought about that, and honestly I have no idea. Speaking about serverless: basically, if you have code as the application, there is no infrastructure piece where you would have your reconciler running. I mean, you could have a centralized reconciler dealing with, let's say, your AWS calls and/or your Google calls to get your software up and running, but that would not be the same, as you'd have a central component, and in our interpretation of GitOps you want to avoid that central component. So you need some kind of compute to make this reconciliation happen, but at the target system. With serverless: never tried it; contact me, let's give it a go. And one suggestion by Adam was: would it be possible to run the reconciler in Lambda? Weapon of choice. You can use any language or any technique you want, because you have to understand it at the end of the day, and you have to maintain your reconciler. It's not just about having that famous magic; you need to have a look at it, you need to take care of it. If, as an example, you write one reconciler yourself and it deals with the dev, the test and the production environment, you need to test the reconciler itself before you deploy it, and if you find your errors or your mistakes in all three environments at once, you're having a bad time. Therefore you need to be very careful that you understand what your reconciler is doing, or what it's supposed to do. Is that okay?

Seems to be fine; otherwise I guess people will write again or speak up by themselves. This is actually quite an interesting point, because I think the polling approach would actually not work well with a serverless architecture, but I never thought about that. I think it could actually cause more problems than it solves, but maybe I'm wrong. I never tried it, to be honest. The only thing I would suggest is not having a centralized system to host the reconciler, because that would break the pattern completely and, as you said, would in my opinion give more problems than solutions.

Okay, thank you. You're welcome, any question is welcome. With advantages, of course, come challenges. I mean, it is one hundred percent transparent, as every piece of information is within the Git repository. Not every company, not every company culture, not every person is fine with putting their big knowledge into a small little Git repository. This is a thing you need to address from a people and culture point of view; there's no one-size-fits-all solution. People have to be aware and willing, because once you break that hundred-percent-transparency pattern, you're back in the same old business: if it runs 99.9% automatically, except for that one secret which is only available via a ticket to Mr. XY, that doesn't work. Same thing with how collaboration changes: people will not SSH to a target box or pull a trigger in a pipeline or do things like that; they collaborate through commits in a designated Git repository. That needs time to get into people's heads; you need to get used to it, and that really takes time. And as we have the advantage of Git being the single point of truth, it is the single point of failure as well. If you have a hundred clusters and just a single Git repository, and that repository breaks, you're having a bad time. So you really need to make sure that your Git is up and running.

Complexity might increase; complexity will increase, because once you get a taste for reconcilers which install dozens of applications, you want more, and therefore you will look into operator territory, and it will get complex, definitely. It depends on your setup: it's pretty easy nowadays to get reconcilers up and running in Kubernetes, but if you're in the traditional VM world, the initial effort might be a bit higher. And as said before, if you decide to use reconcilers and operators, of course they need to be maintained, they need to be tested, and they need to be updated, in a safe environment; otherwise you're in hell. And as a result of the declarative approach, secrets management and stateful data need to be handled declaratively as well. So you really need to handle these and get rid of the old-fashioned flavors: hard-coded secrets, you know, all the dirty little secrets you might have in your code.

Being Kubernetes nerds, of course we say: make your life easier, use orchestration, try Kubernetes. If you've never heard of it, try it; it's worth your time. For a simple reason: containers in general give you separation, and in the ideal world all a container delivers is logic. You need to enrich it with configuration, and you need to enrich it with secrets and/or with data. The good thing is, you can use the same container with multiple configurations, within seconds, in your local environment; your software developer tests the software in exactly the same manner as you run it in your production environment. This is a huge advantage. Of course there are trade-offs sometimes in terms of complexity, but it's really worth the effort. We even run our pipelines within Docker, and it's the same pipeline that runs on the dev box; it's not really tied to GitHub, to GitLab, to Bamboo, to Jenkins. It's a container, you can move it anywhere; it's pretty comfortable. Containers don't lie: if you do it like that, there is no "works on my local machine", because it's always the same container and always the same behavior. They don't lie, and they're pretty honest.

On top of that, Kubernetes abstracts the whole mechanism even more, and it orchestrates multiple functions automatically: your load balancers, your certificates, your secrets, your stateful data, your whatever; if you're used to Kubernetes, you know what I'm talking about. In addition, it takes care of scaling, it takes care of your compute power, and it takes care of the separation of concerns: you can fully separate your environments and be sure that no one messes up your production environment, if you do it right. One of the biggest advantages for GitOps in the Kubernetes world is that it's configured with YAML, which is declarative; therefore it's pretty easy to declare an application and reconcile it. And it eases the separation between applications even more, due to pods, services, service meshes; there are so many tools you might use with a Kubernetes cluster. It really is production grade right now. In order to be able to do that, you need to separate your essentials: you need to separate logic from configuration, from secrets and from stateful data, and it's not a "sometimes" thing; you need to do it 100%, and 100% precisely, because you need to define your configuration per environment and your secrets per environment. And if you're dealing with stateful data, as most of you will, you need to make sure that your latest test database backup ends up in the test environment when you run your integration tests, and that your latest production backup is taken before production is deployed. So you really need to address that issue, accept it, and take care of it.
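The separation of logic, configuration, secrets and stateful data described above looks roughly like this in Kubernetes terms. The image, ConfigMap, Secret and claim names below are made up for illustration; only the pattern matters: the container image is pure logic, everything else is declared per environment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: myapp-dev              # hypothetical environment namespace
spec:
  replicas: 1
  selector:
    matchLabels: {app: backend}
  template:
    metadata:
      labels: {app: backend}
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:1.0     # logic only
          envFrom:
            - configMapRef: {name: backend-config}    # per-environment config
            - secretRef: {name: backend-secrets}      # per-environment secrets
          volumeMounts:
            - {name: data, mountPath: /var/lib/backend}
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: backend-data                   # the stateful bit
```

The same image runs in dev, test and production; only the referenced ConfigMap, Secret and claim differ per environment.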

Of course you can build your own operator, which we did two years ago, and the only advice I can give you, besides "use the language you're comfortable with", is this: I wouldn't use a 500-megabyte-memory-consuming Java monolith to install an Apache. Keep it reasonable in terms of the consumption of the operator itself and its complexity; it will increase over time anyway. And the strong advice: do not create your own operator monster. You don't want a monster that takes care of a monster, because you need to maintain it, and it grows with every new demand and every new functionality. All developers like features, and every new feature needs to be maintained. If you have a monster to control a monster, you didn't win anything: you will spend more time checking, deploying and handling the operator than the actual application.

So basically, what is GitOps? Git as the single source of truth: everything in an ops repository is what gets reflected by reconcilers or operators on your target system. Developers as well as ops people collaborate through a single repository, always. There is no changing something on the box, no changing something in the environment. Every piece of information that needs to be reflected on the target system, a firewall change, a needed Apache, a needed nginx, whatever, has to be declared in this repository. There is no hidden information besides that.
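In the Kubernetes world, such a per-environment declaration is often expressed as a Kustomize overlay. A hypothetical sketch (layout, names and versions invented for illustration) of an `overlays/dev/kustomization.yaml`:

```yaml
# Take the base application and change only what differs in dev.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # the core: front end, back end, database
  - namespace.yaml          # extra resources this environment needs
namespace: myapp-dev
configMapGenerator:
  - name: frontend-config
    literals:
      - ENVIRONMENT=dev     # per-environment configuration
images:
  - name: tomcat
    newTag: "9.0.50"        # a deployment is just this version bump, committed
```

A deployment then really is a one-line change: commit a new `newTag`, and the reconciler watching this path does the rest.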

So how do we collaborate? We turn the push around into a pull: a push-to-pull approach. We don't push things, we don't install things; we install a reconciler and tell the target system to pull its things, or many systems out of the same repository. We declare the status of a target environment instead of deploying and repetitively following boring installation steps. We automate the process, as said, through reconcilers and operators, and at the end of the day we will hopefully always have the same results.

So much for the PowerPoint; I can follow that up with a bit of a practical demo, if you'd like me to, just to give you a flavor of how this looks.

That would be nice, yeah, thanks a lot. Let's have a look if that works. That works, almost; there was a browser, I'm pretty sure. I guess this is due to the screen share.

That looks much better, okay. Just as a small demonstration: what did I do this morning? I created two Kubernetes clusters at Google, one completely installed, the other completely empty, because I didn't know how long I would talk, and I connected to them. Let's have a look at what's running right now, and let me type the right letters.

I tried two things an hour ago. First of all, I installed our own operator, which is called BOOM. BOOM in turn had a look at this GitOps repository, and out of the YAML files there I told it to please install the complete helper application stack, I should call it: I want to have logs, I want to have metrics collection, I want to have monitoring, I want to have a reconciler installed, I want to have an API gateway, yada yada; you get the idea. We just declare the target system's status, and that's what it does. Basically it installed everything within this namespace, so we have a Prometheus, we have node exporters, we have our logging, we have Grafana with reasonable dashboards, and we have it all configured. After I did that (it took about six to seven minutes), I configured, or took care of, Argo CD,

which is the reconciler. Let's have a look. First of all, I needed to tell the reconciler where to look for its configuration: what is the ops repository, where do I find my crap? On top of that, I ended up declaring an application; this part is specific to Argo CD. I don't want to say which tool is the best tool, it's about the idea; for visualization purposes Argo CD is good for this demonstration. Basically you tell it: okay, this is the repository you have to check out, and there is a path you should have a look at, and we see it's the overlays/dev path. So let's have a look: oh, there's some Kubernetes stuff, and there is a base description. Let's keep things simple; if I'm too fast, please interrupt me. In the base description I just told Kubernetes that there is an application that consists of an httpd, a MySQL and a Tomcat. I would never use an Apache or a Tomcat within Kubernetes for our own software, but just to keep things transparent and visible, this uses the classic front-end, back-end, database approach. So this is the core; as I said before, it's just a basic description (the nginx isn't used yet). On top of that I declared an overlay by the power of Kustomize, and Kustomize has a really cool function:

you tell it where its base is, so it knows about the httpd, the Tomcat as well as the MySQL. On top of that, as we're in the dev environment, it knows the namespace into which it has to install its application, and it has certain additional resources, which are the stateful data claim and the namespace, which should have that name. In addition, the environment will have its own front-end configuration, or any configuration; you could use any config map, this is just for demonstration. And there is a patch: the minimal information needed to do a deployment in our world would be to change a version from one to another. Kustomize will first resolve the base declaration, then add the overlay declaration, and on top of that change, through a patch, the image version. The reconciler will then have a look at this path, fulfill what Kustomize describes, and install the application. That's what I did.

Just quickly, I thought Adam raised his hand. Adam, if you want to unmute? I don't know if the question is related to this; no, it's not related to that, I can also ask after the presentation, I'm fine. Okay. So, my questions: there are two things, basically, that kept me from using GitOps so far. One is: how do I cope with failure? If something goes wrong, how would I fix things by hand to make the reconciler happy again? I think that might be even more complicated than the push approach.

No, it's not complicated at all, because if things go wrong, you know that things were right before, and the changes you made were here: you did a commit in your ops repository, and that commit reflects into the issue. I know this isn't the happy path, and really, in the ideal world, I should say, but in general every change you make goes through this repository, so every error reflects that too (besides software errors of the reconciler itself, which happen as well). If there is something wrong with your application, revert your commit and you should be happy again.

Okay, the other thing is about dependencies. For example, we have an application with pretty strong dependencies on installing some specific Kubernetes controllers, like Istio, Knative and so on, and I feel that might be pretty difficult to do in a declarative fashion, especially when, for example, you need to wait until a controller is up before you really start deploying something else; otherwise you will get, what is it called, admission-hook and whatever failures, right?

I'm with you. Service meshes are a completely different beast, and I don't know if the pattern of a reconciler, which just says "you have a couple of applications and you want to install a couple of applications", applies to things like a service mesh, and Istio especially, because you can do pretty crazy things with Istio. From my point of view, installing Kubernetes itself: I mean, we install Kubernetes with an operator, where we wrote some kind of abstraction for providers, for Google, for local providers, whatever, and we install Kubernetes that way. This is possible, but it took us a couple of months to have an operator reliable enough to have everything functional. From that point of view we install the network CNI as well; we use Calico, it could be Istio, but if you use Istio you need to configure it and, as you said, have some kind of waiting point. So you need logic, and every time you need logic on top of "just install this, please", you need an operator. Therefore you should really consider whether you want to go down that path or not; it's up to you.

But would you recommend it in this case, if you really need to install operators? With the experience of one and a half to two years, where we started with dirty Bash scripts and are now capable of installing complete self-scaling Kubernetes clusters within 40 minutes: it's worth the effort, it's worth the pain. That's the honest answer. Okay, thanks. There are responsibilities on both sides, but you need to handle it, you need to take care of it. And what if Google decides to have a new service mesh coming around next year? You need to adapt your operator; be aware of that.

Okay, replacing one service mesh with another with GitOps, that definitely doesn't sound like fun. If you designed your setup to have separate services for each purpose, and one purpose might be the service mesh, so you can test Istio against HashiCorp Consul or whatever; if you designed it like that, with that separation of concerns, you're having a good time. If you didn't, yes, you're in hell.

Okay, we do have another question from Etienne, and the question is (I guess there was another one before around this topic): how do you implement GitOps without Kubernetes? So, say you're in a traditional VMware-based environment. The simplest approach I can think of: all you need is some kind of SSH key pair. Of course you need to install something on the target box. Let's say your operations team delivers you a virtual machine; all they should give you is either a patched VM or a pair of SSH keys. You log in there and you install what I described as an operator. The simplest approach for me would be some kind of Ansible or some kind of Terraform: you tell Ansible or Terraform to check out a repository where Ansible roles or Terraform manifests live, check it out, and apply it to the local machine. And, to keep things simple (the simplest approach, don't judge me on that), you tell a cron job to execute the same Ansible role over the same system every five or every n minutes, and that would do it. You could do anything from installing an Apache to a complete complex configuration. All you need is a piece of software (it could be a Bash script) and a repository that's reliable; it installs whatever you need, reliably. Then stop touching the box, communicate through the ops repository, and make your script, mechanism, whatever it is, more complex only the moment you actually have the need for that complexity.

Does that answer your question?
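As a side note, the cron-plus-Ansible idea described in the answer has a ready-made tool in `ansible-pull`, which checks out a repository and applies a playbook to localhost. A hypothetical crontab entry (repository URL and playbook name are invented) could look like this:

```
# Every 5 minutes: fetch the ops repository and apply the playbook to
# this machine; -o (--only-if-changed) skips the run when nothing changed.
*/5 * * * * ansible-pull -U https://git.example.com/ops-repo.git -o local.yml >> /var/log/ansible-pull.log 2>&1
```

This keeps the pull pattern intact on a plain VM: the box converges itself from the repository, and nobody pushes to it.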

Can I have just one comment here? When you say you are going to use Ansible or Terraform in this case, you are going to use a push mechanism, right? No; I know they are meant to do push, but no one said you can't use them to apply their magic to localhost only. In general, especially with Terraform, with a state file that needs to live somewhere, I know there are habits and/or principles within these two tools that are not really GitOps-friendly, but I have had really good experiences, especially with Ansible, and I would use it. Okay, thank you.

can you maybe... hi, this is Flavio. thanks for the presentation, i have a quick question. this is always a bit the interesting part: you say you have a declarative approach for your infrastructure, but i could argue that as soon as you run your application on that infrastructure, it is basically mutated; it's no longer immutable infrastructure. how does the GitOps pattern help in this kind of scenario? let's say you have a database deployment and you do it through the GitOps path: your infrastructure stays the same, but you generate a lot of logs. how does the GitOps pattern help in such a scenario? in general i disagree with you: the infrastructure is immutable, your data isn't. your application will change data, and hence you could say the infrastructure it is running on, the VM, should not be changed by your application at all. but it does, right? it consumes memory, it consumes CPUs. okay, i'm with you. in our world, speaking of applications, it's really the data that changes, so i would take care of the database, and of course of my incremental or regular backups. but the application bit, the Java virtual machine or, in our world, a Go container, is just logic. i would always try to separate the logic bit from the stateful data bit to be able to reproduce. a better sentence, the other way around: the focus is always to reproduce your application and the complete cluster straight out of Git. if that means forwarding logs to a central log store or cloud store, or doing an S3 backup of your database and telling your application on a fresh installation to pull a dump out of it, then that's the case. does that answer your question?

yeah, kind of. my problem is, i'm still struggling a bit to see the clear benefit outside of a Kubernetes cluster. i have seen GitOps working very nicely there, but outside of that world i still struggle a bit to see the benefit over a push-based approach. let's say you have your CI/CD pipeline: what would be the benefit? can you explain that? i'll give you a single one: in traditional CI/CD-based deploy pipelines, most people tend to use their pipelines in a way that most of the configuration, and especially the secrets, which are always the pain point, is stored within their CI or CD system. if it's GitLab, you know what i'm talking about; or Jenkins, whatever. if you handle that correctly, the biggest benefit is that you handle your secrets in an appropriate manner: you will not have a secret hidden in a pipeline, you will have the secret in a Git repository as well. no, that is not true: you can integrate secrets management, you can integrate Vault, GitLab supports a Vault integration, or a secrets manager from AWS, or whatever you use. i don't think that is actually true any longer. if you already handled your secrets well on your stateful data, it is up to you whether you want to push. by the end of the day, the pull approach has its charm if you're reproducing more often than not, if you want to rely on a central repository and have target systems pulling their information, and if you're deploying often. i'm with you that it's more Kubernetes-like, but i wouldn't say it's Kubernetes-only; in that regard i would disagree.

regarding the push/pull question: one thing that has been quite beneficial, at least for myself, to see from a pull model is that there's a central place to manage everything. versus a push model: if your CI/CD pipelines are pushing changes out and you want to revert something, where do you step in to undo that work? makes sense; that's the reason i see the reference repository as well. because a commit to the repository triggers a pipeline build, wouldn't a revert of the commit fix it? that's how i would do it. i mean, by the end of the day you're switching logic from the target system to your pipeline, whether you call it operator or reconciler or however we want to call it; that's the deal you're talking about. if that's fine for you, it's fine with me, but by the end of the day you need to have a description of a piece of software somewhere, and you need to have some kind of logic. if you're fine having it in a pipeline, i'm fine with that; i prefer having it on the target system. i think this also speaks a little bit about the separation of operational responsibility: if you have a team that is responsible for managing these services, the use of GitOps makes a lot of sense for that team. whereas if you are pushing everything to the left, including deployments, maybe it makes more sense to have each individual team maintain that via a push model. i don't want to debate that, but that is a concern, that is a way of looking at things: how do you want to maintain, how do you want to manage these systems?
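The "revert the commit" rollback mentioned a moment ago is easy to demonstrate with plain Git. A minimal, self-contained sketch — the file name and image tags are made up, and the reconciler that would actually roll the running system back is left out:

```shell
#!/bin/sh
# Simulate a GitOps rollback: the "deployment" is a commit, the rollback a revert.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "image: httpd:2.4.48" > deployment.yaml             # currently running version
git add deployment.yaml && git commit -qm "deploy httpd 2.4.48"
echo "image: httpd:2.4" > deployment.yaml                # a (bad) upgrade lands
git add deployment.yaml && git commit -qm "bump to 2.4"
git revert --no-edit HEAD                                # undo the bump as a new commit
cat deployment.yaml                                      # declared state is 2.4.48 again
```

The history stays append-only: nothing is force-pushed or deleted, the rollback is just one more commit that the reconciler picks up like any other.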

well, actually, i don't want to manage any system at all; that's why i use the cloud, because all i want to do is write software. and that is actually a question that i have as well with regards to the separation: in your model, everything that is, let's say, monitoring-related or logging-setup-related is basically in an ops repository, if i understand that correctly. whereas i think this is actually really something that should be part of the developer, the DevOps kind of responsibility, and hence it should actually be part of the application repository and not necessarily in an ops repository, because then you actually separate it out again, right? DevOps is not about separation.

that is... i mean, you're saying DevOps isn't about separation, but your arguments are about responsibility. in my world, we're a small team and everyone is dev sometimes and everyone is ops sometimes. we don't care about responsibilities; we care about errors, about failures, and we fix them.

i have an input regarding the all-in-one-repository approach. we oftentimes deploy several different clusters with the same application on top, and we don't necessarily want to include every configuration in the same repository for that. so there might be a base configuration within the development repository, but the configuration we apply to different stages might differ quite a lot. we think of it as: one repository represents a system, with everything it needs to run that system, without the application's code. that's just our way of thinking about that topic. i mean, it pretty much depends: if someone says they don't really care about infrastructure, that's fine; everybody is free to use AWS or Google and rely on their tools, and therefore you don't have the need to install any monitoring infrastructure at all. you have the responsibility at your hyperscaler and you pay them for it; that's fine.

okay, i guess we're in the middle of Q&A already. you basically stopped showing your example; if you want, you can go back and continue the demo, so we can complete it and then do a full Q&A session at the end. okay, if that's okay, i will rush a bit; it's just a click away.

most of you who have been working with reconcilers will know what this looks like: you have your application that was declared as a base application, enriched by an environment, and the reconciler took all the YAMLs and just installed everything. if you want to change something, just do a quick deployment: let's say we want to go from 2.4.48 to just 2.4 of our Apache server. i don't want to mess up the YAML file, and, to give a slight hint towards the pipelines: we have customers where pipelines just change this. the effect by the end of the day is the same; you have a pipeline mechanism changing the dev infrastructure, for instance. so what the reconciler will now do, in a minute or after i click that button, is check out the system Git repository, resolve the dependencies, and hopefully find out that i want a new version of my frontend. let's have a look.

let's wait a moment...

already done.

this is the fancy bit about Kubernetes: you just declare a new version and it handles the rest. therefore we have, by design, some kind of a zero-downtime deployment; the same applies to scaling etc. so this is the new pod, the corrected version, or whatever version, and i didn't do anything to the target system, nor did i use, besides Git, any command to tell Kubernetes to do something; that's part of the reconciler's job.
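The zero-downtime behavior described here comes from the Deployment's default rolling-update strategy. After the reconciler has applied the new version, the rollout can be watched with read-only commands — the label and deployment name below are made up for illustration:

```shell
# Observe the rolling update the reconciler triggered; nothing here mutates state.
kubectl get pods -l app=frontend -w          # new pod comes up before the old one goes
kubectl rollout status deployment/frontend   # blocks until the rollout is complete
```

This keeps the GitOps contract intact: humans (and pipelines) only read from the cluster, while all writes flow through the repository.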

basically, that was the last bit i wanted to show. you have a new deployment; you could change any piece, any moving target within that declaration. you just have to have something or someone who takes care of it, and all you do for a deployment, by the end of the day, is a Git commit.
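The "a deployment is just a Git commit" workflow can be sketched end to end with plain Git. Everything here (repository layout, paths, image tags) is illustrative, and the reconciler that would actually apply the new state to a cluster is left out:

```shell
#!/bin/sh
# A GitOps "deployment": edit the declared version, commit, done.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p apps/frontend
echo "image: httpd:2.4.48" > apps/frontend/deployment.yaml
git add . && git commit -qm "initial state"
# The actual deployment step: change the declared tag and commit it.
sed -i 's/httpd:2.4.48/httpd:2.4/' apps/frontend/deployment.yaml
git add . && git commit -qm "frontend: httpd 2.4.48 -> 2.4"
cat apps/frontend/deployment.yaml   # the reconciler would now apply this state
```

In a real setup the commit would be pushed to the central ops repository, and the reconciler watching it would roll the cluster to the new version.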

that's it. handing over to you, Martin. thanks a lot. do you have some time for a question still? of course. good, so, any more questions for Christian? thanks a lot for a great presentation, much appreciated. do you have any more questions for Christian? just one. Christian, thank you, it's really great to meet you. my question is: what was the mindset of your developers, of your teams, when you decided to change from DevOps to GitOps? how did your team accept the change? how did you take the decision, and what was the risk to take in this case? i mean, two years ago GitOps was an idea being introduced by Weaveworks. Florian, who is also on the call, read an article about it; we had a chat and thought about it: how would it behave, how would it look? we just tried it out and changed the logic. we're a startup, so we were free, happy, and able to try things out, and so we did. the challenge we had was that there were many reconcilers on the market in those days, and we tested and tried a lot of them. some of them have advantages in the visual area, like Argo; it looks fancy, speaking honestly. some have advantages in the performance area, like the one from Weaveworks. the main challenge for us was to accept that we need to leave the box, leave Kubernetes, leave whatever our compute power is, as it is, and just collaborate through a single repository. it took us time to find the mechanisms to make this happen effectively; that was definitely the main challenge. besides that, we were a pretty DevOps-y team anyway, and we didn't change our mindset at all. it just changed the way we deploy things, collaborate, or think about deployments at all. as an example, for our own IAM we started with Argo, and i built a monster of a declaration for migrations of databases through deployments etc. it really was a monster, and we stepped aside from Argo and decided to use an operator to fulfill our specific needs, because we had tried to bend this reconciler more and more, to an extent that was pretty unhealthy. therefore you need to know what you want, which tool is the right one for you, and you have to have the people that are willing to join you.

thank you. any more? there's one interesting thought for me as well: someone wrote that if you have a reconciler, there are no temptations to tweak something manually, because you know the reconciler will revert your change immediately. so this increases reproducibility at any time, and it's good for disaster recovery cases. that is right. i mean, our approach in general is to be able to reproduce a complete system, whether on Google or on-prem or wherever, including the cluster, out of that repository; not just an application, the complete system, as fast as possible.

cool. are there any more questions for Christian?

this seems not to be the case. so, Christian, thank you very much for sharing your thoughts; it was a really interesting topic.

thanks a lot for that. as i mentioned before, if you want to see Christian live, if you want to have further discussions with him, there's a good chance to meet him at DevOpsDays in Winterthur on the 7th and 8th of September. if you're tired of online meetings: this will again be a physical conference in Winterthur, so get your tickets now. if you don't want to wait until the 7th or 8th of September, on the 11th of August we have another meetup, this time about consumer-driven contract testing; it's already published on the Meetup platform as well. i will upload the video of today's talk by the day after tomorrow at the latest, so if you want to watch something, check out the video as well. with that, i wish you all a very nice evening and hope to see you soon. thanks a lot, see ya, bye.

Liked it? Share it!