
Site Reliability Engineering

Site Reliability Engineering (SRE) is a discipline that incorporates aspects of software engineering and applies them to operations, with the goal of creating ultra-scalable and highly reliable software systems. Google’s mastermind behind SRE, Ben Treynor, describes site reliability as “what happens when a software engineer is tasked with what used to be called operations.”

Historically, Dev teams want to release new features continuously (Change), while Ops teams want to make sure those features don’t break their stuff (Reliability). Of course the business wants both, so these groups have been incentivized very differently, leading to what Lee Thompson (formerly of E*TRADE) coined the “wall of confusion.” This inherent conflict creates a downward spiral: slower time to market for new features, longer deployment cycles, a growing number of outages, and an ever-increasing amount of technical debt.

The discipline of SRE can begin to reduce this dilemma by introducing analytics and statistical analyses for green- or red-lighting launches, and by helping to resolve the tension between stability and agility, operational work and software engineering, and proactive and reactive work. These SRE teams are staffed with developer/sysadmin hybrids who not only know how to find problems but, according to Google’s Melissa Binde, “figure out why it happened, what was the root cause, figure out how to detect it sooner and ideally ensure that it doesn’t happen again.” Sounds a lot like ITIL’s Problem Management process, only on steroids.

So at a basic level, here is how I understand this works. As we all know from years of experience, and from just being human, nothing is perfect. None of our services ever really achieves 100% uptime; it’s why we invented SLAs. Take it from someone who used to write them. This is the concept I think is just so cool: if a team agrees to a 99.8% SLA, it gives them an “error budget” of 0.2%. This is the maximum allowable threshold for service interruptions. The production team can spend this error budget however they see fit and, in turn, release whenever and whatever they want, provided they stay within the SLA. They get green-lighted based on past performance. If they are operating at or below the defined SLA, all launches are red-lighted until they reduce the number of errors to a level that allows the launch to proceed. SREs (Ops) and developers (Dev) therefore have a strong incentive to work together to minimize the number of errors. This ties completely into the cultural and professional movement known as DevOps, which stresses communication, collaboration and integration between software developers and operations professionals while automating the process of software delivery and infrastructure changes. A sketch of the error-budget arithmetic follows below.
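To make that arithmetic concrete, here is a minimal sketch in Python of how an error budget could be computed and used to green- or red-light a launch. The function names, request counts and 99.8% target are purely illustrative assumptions for this post, not any particular team’s tooling.

```python
# Minimal error-budget sketch, assuming a hypothetical service that counts
# total and failed requests over the SLA measurement window.

def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent.

    slo_target: agreed availability, e.g. 0.998 for a 99.8% SLA.
    """
    allowed_failures = (1.0 - slo_target) * total_requests  # the 0.2% budget
    if allowed_failures == 0:
        return 0.0
    return 1.0 - (failed_requests / allowed_failures)


def launch_decision(slo_target: float, total_requests: int, failed_requests: int) -> str:
    """Green-light releases while budget remains; red-light once it is spent."""
    remaining = error_budget_remaining(slo_target, total_requests, failed_requests)
    return "GREEN: release when ready" if remaining > 0 else "RED: stabilize before launching"


# Example: a 99.8% SLA over 1,000,000 requests allows 2,000 failed requests.
print(launch_decision(0.998, 1_000_000, 1_500))  # GREEN: budget not yet exhausted
print(launch_decision(0.998, 1_000_000, 2_300))  # RED: budget overspent
```

The design choice this illustrates is the key point of the paragraph above: the decision to ship is driven by measured reliability against the agreed SLA, not by negotiation between Dev and Ops.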


For more information: www.itsmacademy.com/devops
