Revisions
mohakevin revised this gist
Oct 16, 2016. 1 changed file with 4 additions and 4 deletions.
@@ -83,16 +83,16 @@

Through monitoring: failures loading the websites and errors on the admin server.

| xx:xx | **Incident begins** LINK to Nagios where the KO is observed |
| xx:xx | Problem detected via **Nagios** |
| xx:xx | more details... |
| xx:xx | **Problem mitigated**, main web service restored. |
| 18:14 | **Eva up, service restored** Adrián brings the Node service up. |
| 19:00 | **Disk usage problem reappears**, usage is detected to still be high. |

2016-10-16

| Time | Description |
| ---- | ----------- |
| xx:xx | Communication to the dev team. |
| xx:xx | **Solution** The dev team identifies the problem: a disk partition filled up |
| xx:xx | They send guidelines to avoid this behavior happening again |

---
mohakevin revised this gist
Oct 16, 2016. 1 changed file with 5 additions and 5 deletions.
@@ -80,12 +80,12 @@

Through monitoring: failures loading the websites and errors on the admin server.

| Time | Description |
| ----- | ----------- |
| xx:xx | **Incident begins** LINK to Nagios where the KO is observed |
| xx:xx | Problem detected via **Nagios** |
| xx:xx | more details... |
| xx:xx | ** Problem mitigated **, main web service restored. |
| 18:14 | ** Eva up, service restored ** Adrián brings the Node service up. |
| 19:00 | ** Disk usage problem reappears **, usage is detected to still be high. |

2016-10-16

---
mohakevin revised this gist
Oct 16, 2016. 1 changed file with 2 additions and 0 deletions.
@@ -88,7 +88,9 @@

| 19:00 | ** Disk usage problem reappears **, usage is detected to still be high. |

2016-10-16

| Time | Description |
| ---- | ----------- |
| xx:xx | Communication to the dev team. |
| xx:xx | ** Solution ** The dev team identifies the problem: a disk partition filled up |
| xx:xx | They send guidelines to avoid this behavior happening again |

---
mohakevin revised this gist
Oct 16, 2016. 1 changed file with 1 addition and 1 deletion.
@@ -82,7 +82,7 @@

| ----- | ----------- |
| xx:xx | **Incident begins** LINK to Nagios where the KO is observed |
| xx:xx | Problem detected via Nagios |
| xx:xx | more details... |
| xx:xx | ** Problem mitigated **, main web service restored. |
| 18:14 | ** Eva up, service restored ** Adrián brings the Node service up. |
| 19:00 | ** Disk usage problem reappears **, usage is detected to still be high. |

---
mohakevin revised this gist
Oct 16, 2016. 1 changed file with 1 addition and 0 deletions.
@@ -88,6 +88,7 @@

| 19:00 | ** Disk usage problem reappears **, usage is detected to still be high. |

2016-10-16

| Time | Description |
| xx:xx | Communication to the dev team. |
| xx:xx | ** Solution ** The dev team identifies the problem: a disk partition filled up |
| xx:xx | They send guidelines to avoid this behavior happening again |

---
mohakevin revised this gist
Oct 16, 2016. 1 changed file with 63 additions and 80 deletions.
@@ -1,116 +1,99 @@

# Postmortem: incident #Ticket-ID

## Date

2016-10-16

## Authors

* Carlos
* Adrián
* ...

## Status

Complete; improvement items still in progress.

## Summary

Problems in the service provided to Euronics, which caused loss of service on the xxxxx website and/or the xxxx service.

## Impact

Loss of service and of requests to www.xxxxx.es and eva.xxxx.es. Presumed economic loss, not quantified.

## Root Causes

Cascading failure of services for client xxxxx, due to a disk usage problem. It filled up the server, corrupted the database, and stopped services. A server reboot was required to recover it. As a consequence of that action, additional problems were found in the virtual network cards, which caused a second outage.

## Trigger

Disk filling up (a minimal usage-check sketch follows this revision).

## Resolution

Fixed the virtual network cards of the VM hosting the admin server. Once the environment was up, recovered the database and the service. The EVA service remained pending: it had not started and was failing in monitoring. It was started in the afternoon.

## Detection

Through monitoring: failures loading the websites and errors on the admin server.

## Action Items

| Item | Type | Owner | Ticket |
| ----------- | ---- | ----- | --- |
| Validate monitoring of the entire environment | Mitigate / correct | Javier | **done** |
| Review startup of the Node environment | Mitigate / correct | Antonio | **TODO** |
| Track down the VMware virtual-card problem | Prevent | Carlos | **TODO** |
| Coordinate a restart/shutdown procedure to validate the previous items | | Javier | **TODO** |
| Document the environment and processes | | Javier | **TODO** |

## Lessons Learned

* All systems must be monitored, including processes.
* There must be living documentation that changes as services are added.
* ALWAYS configure services to start on boot. ALWAYS validate with a test.

### What went well

* Monitoring alerted quickly to a problem in the environment.
* The VMware problem was corrected rapidly.

### What went wrong

* Insufficient knowledge of the environment and services to return them to a good state; the developers had to point us to the origin of the problem.
* Several cascading failures across multiple layers: virtualization and application logic.

### Where we got lucky

* The database recovered automatically.
* The eva site does not appear to be very important, and its failure did not become widely known.

## Timeline

2016-10-15

| Time | Description |
| ----- | ----------- |
| xx:xx | **Incident begins** LINK to Nagios where the KO is observed |
| xx:xx | Problem detected via Nagios |
| xx:xx | .... |
| xx:xx | ** Problem mitigated **, main web service restored. |
| 18:14 | ** Eva up, service restored ** Adrián brings the Node service up. |
| 19:00 | ** Disk usage problem reappears **, usage is detected to still be high. |

2016-10-16

| xx:xx | Communication to the dev team. |
| xx:xx | ** Solution ** The dev team identifies the problem: a disk partition filled up |
| xx:xx | They send guidelines to avoid this behavior happening again |

## Supporting Information

* Monitoring, <http://guardian/>
* Ticket, if one exists

---
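The trigger recorded above is a partition filling up, and the dev team's follow-up was a set of guidelines to keep it from happening again. Purely as an illustration of the kind of check Nagios could run against that partition (the postmortem does not include the actual check; the mount point and thresholds below are hypothetical), a minimal disk-usage alarm might look like this:

```python
#!/usr/bin/env python3
"""Minimal disk-usage check in the Nagios plugin style (illustrative sketch only).

Exit codes follow the Nagios convention: 0 = OK, 1 = WARNING, 2 = CRITICAL.
The mount point and thresholds are hypothetical, not taken from the incident.
"""
import shutil
import sys

MOUNT_POINT = "/var/lib/app"   # hypothetical partition that filled up
WARN_PCT = 80.0                # warn at 80% used
CRIT_PCT = 90.0                # page at 90% used


def main() -> int:
    usage = shutil.disk_usage(MOUNT_POINT)
    used_pct = 100.0 * usage.used / usage.total
    msg = f"{MOUNT_POINT} is {used_pct:.1f}% full ({usage.free // 2**20} MiB free)"
    if used_pct >= CRIT_PCT:
        print(f"CRITICAL - {msg}")
        return 2
    if used_pct >= WARN_PCT:
        print(f"WARNING - {msg}")
        return 1
    print(f"OK - {msg}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run periodically, a check like this surfaces the rising usage that the 19:00 timeline entry describes before services start to fail, instead of alerting only once the websites stop loading.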
mlafeldt revised this gist
Sep 12, 2016. 1 changed file with 41 additions and 41 deletions.
@@ -6,7 +6,9 @@

## Authors

* jennifer
* martym
* agoogler

## Status

@@ -72,45 +74,43 @@

Borgmon detected high level of HTTP 500s and paged on-call.

2015-10-21 (*all times UTC*)

| Time | Description |
| ----- | ----------- |
| 14:51 | News reports that a new Shakespearean sonnet has been discovered in a Delorean's glove compartment |
| 14:53 | Traffic to Shakespeare search increases by 88x after post to */r/shakespeare* points to Shakespeare search engine as place to find new sonnet (except we don't have the sonnet yet) |
| 14:54 | **OUTAGE BEGINS** -- Search backends start melting down under load |
| 14:55 | docbrown receives pager storm, `ManyHttp500s` from all clusters |
| 14:57 | All traffic to Shakespeare search is failing: see <http://monitor/shakespeare?end_time=20151021T145700> |
| 14:58 | docbrown starts investigating, finds backend crash rate very high |
| 15:01 | **INCIDENT BEGINS** docbrown declares incident #465 due to cascading failure, coordination on #shakespeare, names jennifer incident commander |
| 15:02 | someone coincidentally sends email to shakespeare-discuss@ re sonnet discovery, which happens to be at top of martym's inbox |
| 15:03 | jennifer notifies shakespeare-incidents@ list of the incident |
| 15:04 | martym tracks down text of new sonnet and looks for documentation on corpus update |
| 15:06 | docbrown finds that crash symptoms identical across all tasks in all clusters, investigating cause based on application logs |
| 15:07 | martym finds documentation, starts prep work for corpus update |
| 15:10 | martym adds sonnet to Shakespeare's known works, starts indexing job |
| 15:12 | docbrown contacts clarac & agoogler (from Shakespeare dev team) to help with examining codebase for possible causes |
| 15:18 | clarac finds smoking gun in logs pointing to file descriptor exhaustion, confirms against code that leak exists if term not in corpus is searched for |
| 15:20 | martym's index MapReduce job completes |
| 15:21 | jennifer and docbrown decide to increase instance count enough to drop load on instances that they're able to do appreciable work before dying and being restarted |
| 15:23 | docbrown load balances all traffic to USA-2 cluster, permitting instance count increase in other clusters without servers failing immediately |
| 15:25 | martym starts replicating new index to all clusters |
| 15:28 | docbrown starts 2x instance count increase |
| 15:32 | jennifer changes load balancing to increase traffic to nonsacrificial clusters |
| 15:33 | tasks in nonsacrificial clusters start failing, same symptoms as before |
| 15:34 | found order-of-magnitude error in whiteboard calculations for instance count increase |
| 15:36 | jennifer reverts load balancing to resacrifice USA-2 cluster in preparation for additional global 5x instance count increase (to a total of 10x initial capacity) |
| 15:36 | **OUTAGE MITIGATED**, updated index replicated to all clusters |
| 15:39 | docbrown starts second wave of instance count increase to 10x initial capacity |
| 15:41 | jennifer reinstates load balancing across all clusters for 1% of traffic |
| 15:43 | nonsacrificial clusters' HTTP 500 rates at nominal rates, task failures intermittent at low levels |
| 15:45 | jennifer balances 10% of traffic across nonsacrificial clusters |
| 15:47 | nonsacrificial clusters' HTTP 500 rates remain within SLO, no task failures observed |
| 15:50 | 30% of traffic balanced across nonsacrificial clusters |
| 15:55 | 50% of traffic balanced across nonsacrificial clusters |
| 16:00 | **OUTAGE ENDS**, all traffic balanced across all clusters |
| 16:30 | **INCIDENT ENDS**, reached exit criterion of 30 minutes' nominal performance |

## Supporting Information

* Monitoring dashboard, <http://monitor/shakespeare?end_time=20151021T160000&duration=7200>

---
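The 15:18 entry above attributes the crashes to a file-descriptor leak that only bites when the searched term is not in the corpus. The toy Python sketch below is not the Shakespeare service's code, only an illustration of that bug pattern (the single-file index is hypothetical): an early return on the miss path skips the close, so every query for an unknown term leaks one descriptor until the process exhausts its limit and crashes.

```python
"""Toy illustration of a file-descriptor leak on the 'term not in corpus' path.

Not the Shakespeare service's actual code; the one-shard index exists only to
show why a miss-path leak stays invisible until a missing term becomes popular.
"""
import os
import tempfile

# Hypothetical one-shard index: one line per known work.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("sonnet 18: shall i compare thee to a summer's day\n")
    INDEX_PATH = tmp.name


def search_leaky(term: str) -> list[str]:
    """Buggy version: the early return on a miss skips os.close()."""
    fd = os.open(INDEX_PATH, os.O_RDONLY)
    data = os.read(fd, 1 << 20).decode()
    if term not in data:
        return []            # BUG: descriptor never closed on this path
    os.close(fd)
    return [line for line in data.splitlines() if term in line]


def search_fixed(term: str) -> list[str]:
    """Fixed version: the context manager closes the file on every path."""
    with open(INDEX_PATH) as f:
        data = f.read()
    return [line for line in data.splitlines() if term in line]


if __name__ == "__main__":
    # Hits are harmless; each miss leaks one descriptor in the buggy version.
    for _ in range(5):
        search_leaky("summer")        # hit path: descriptor closed
        search_leaky("unknown-word")  # miss path: descriptor leaked
```

Under normal traffic the miss path is cold, so the slow leak hides below the background crash rate; an 88x surge of queries for the one word not yet in the corpus turned it into the hot path.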
mlafeldt created this gist
Jul 6, 2016.
@@ -0,0 +1,116 @@

# Shakespeare Sonnet++ Postmortem (incident #465)

## Date

2015-10-21

## Authors

jennifer, martym, agoogler

## Status

Complete, action items in progress

## Summary

Shakespeare Search down for 66 minutes during period of very high interest in Shakespeare due to discovery of a new sonnet.

## Impact

Estimated 1.21B queries lost, no revenue impact.

## Root Causes

Cascading failure due to combination of exceptionally high load and a resource leak when searches failed due to terms not being in the Shakespeare corpus. The newly discovered sonnet used a word that had never before appeared in one of Shakespeare's works, which happened to be the term users searched for. Under normal circumstances, the rate of task failures due to resource leaks is low enough to be unnoticed.

## Trigger

Latent bug triggered by sudden increase in traffic.

## Resolution

Directed traffic to sacrificial cluster and added 10x capacity to mitigate cascading failure. Updated index deployed, resolving interaction with latent bug. Maintaining extra capacity until surge in public interest in new sonnet passes. Resource leak identified and fix deployed.

## Detection

Borgmon detected high level of HTTP 500s and paged on-call.

## Action Items

| Action Item | Type | Owner | Bug |
| ----------- | ---- | ----- | --- |
| Update playbook with instructions for responding to cascading failure | mitigate | jennifer | n/a **DONE** |
| Use flux capacitor to balance load between clusters | prevent | martym | Bug 5554823 **TODO** |
| Schedule cascading failure test during next DiRT | process | docbrown | n/a **TODO** |
| Investigate running index MR/fusion continuously | prevent | jennifer | Bug 5554824 **TODO** |
| Plug file descriptor leak in search ranking subsystem | prevent | agoogler | Bug 5554825 **DONE** |
| Add load shedding capabilities to Shakespeare search | prevent | agoogler | Bug 5554826 **TODO** |
| Build regression tests to ensure servers respond sanely to queries of death | prevent | clarac | Bug 5554827 **TODO** |
| Deploy updated search ranking subsystem to prod | prevent | jennifer | n/a **DONE** |
| Freeze production until 2015-11-20 due to error budget exhaustion, or seek exception due to grotesque, unbelievable, bizarre, and unprecedented circumstances | other | docbrown | n/a **TODO** |

## Lessons Learned

### What went well

* Monitoring quickly alerted us to high rate (reaching ~100%) of HTTP 500s
* Rapidly distributed updated Shakespeare corpus to all clusters

### What went wrong

* We're out of practice in responding to cascading failure
* We exceeded our availability error budget (by several orders of magnitude) due to the exceptional surge of traffic that essentially all resulted in failures

### Where we got lucky

* Mailing list of Shakespeare aficionados had a copy of new sonnet available
* Server logs had stack traces pointing to file descriptor exhaustion as cause for crash
* Query-of-death was resolved by pushing new index containing popular search term

## Timeline

2015-10-21 (*all times UTC*)

* 14:51 News reports that a new Shakespearean sonnet has been discovered in a Delorean's glove compartment
* 14:53 Traffic to Shakespeare search increases by 88x after post to */r/shakespeare* points to Shakespeare search engine as place to find new sonnet (except we don't have the sonnet yet)
* 14:54 **OUTAGE BEGINS** -- Search backends start melting down under load
* 14:55 docbrown receives pager storm, `ManyHttp500s` from all clusters
* 14:57 All traffic to Shakespeare search is failing: see <http://monitor/shakespeare?end_time=20151021T145700>
* 14:58 docbrown starts investigating, finds backend crash rate very high
* 15:01 **INCIDENT BEGINS** docbrown declares incident #465 due to cascading failure, coordination on #shakespeare, names jennifer incident commander
* 15:02 someone coincidentally sends email to shakespeare-discuss@ re sonnet discovery, which happens to be at top of martym's inbox
* 15:03 jennifer notifies shakespeare-incidents@ list of the incident
* 15:04 martym tracks down text of new sonnet and looks for documentation on corpus update
* 15:06 docbrown finds that crash symptoms identical across all tasks in all clusters, investigating cause based on application logs
* 15:07 martym finds documentation, starts prep work for corpus update
* 15:10 martym adds sonnet to Shakespeare's known works, starts indexing job
* 15:12 docbrown contacts clarac & agoogler (from Shakespeare dev team) to help with examining codebase for possible causes
* 15:18 clarac finds smoking gun in logs pointing to file descriptor exhaustion, confirms against code that leak exists if term not in corpus is searched for
* 15:20 martym's index MapReduce job completes
* 15:21 jennifer and docbrown decide to increase instance count enough to drop load on instances that they're able to do appreciable work before dying and being restarted
* 15:23 docbrown load balances all traffic to USA-2 cluster, permitting instance count increase in other clusters without servers failing immediately
* 15:25 martym starts replicating new index to all clusters
* 15:28 docbrown starts 2x instance count increase
* 15:32 jennifer changes load balancing to increase traffic to nonsacrificial clusters
* 15:33 tasks in nonsacrificial clusters start failing, same symptoms as before
* 15:34 found order-of-magnitude error in whiteboard calculations for instance count increase
* 15:36 jennifer reverts load balancing to resacrifice USA-2 cluster in preparation for additional global 5x instance count increase (to a total of 10x initial capacity)
* 15:36 **OUTAGE MITIGATED**, updated index replicated to all clusters
* 15:39 docbrown starts second wave of instance count increase to 10x initial capacity
* 15:41 jennifer reinstates load balancing across all clusters for 1% of traffic
* 15:43 nonsacrificial clusters' HTTP 500 rates at nominal rates, task failures intermittent at low levels
* 15:45 jennifer balances 10% of traffic across nonsacrificial clusters
* 15:47 nonsacrificial clusters' HTTP 500 rates remain within SLO, no task failures observed
* 15:50 30% of traffic balanced across nonsacrificial clusters
* 15:55 50% of traffic balanced across nonsacrificial clusters
* 16:00 **OUTAGE ENDS**, all traffic balanced across all clusters
* 16:30 **INCIDENT ENDS**, reached exit criterion of 30 minutes' nominal performance

## Supporting information

* Monitoring dashboard, <http://monitor/shakespeare?end_time=20151021T160000&duration=7200>

---

*Source: Site Reliability Engineering: How Google Runs Production Systems. pp. 487-491*
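One action item in the original postmortem, "Add load shedding capabilities to Shakespeare search", is left as TODO. Purely as an illustration of the general idea (not the team's actual design; the port, limit, and handler below are hypothetical), a frontend can cap in-flight requests and answer 503 immediately once the cap is reached, rather than queueing work it cannot finish:

```python
"""Minimal load-shedding sketch (illustrative only, not the Shakespeare service).

Caps the number of requests being served at once; beyond the cap the server
fails fast with HTTP 503 instead of melting down under a traffic surge.
"""
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

MAX_IN_FLIGHT = 32                      # hypothetical per-instance capacity
_slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)


class SheddingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not _slots.acquire(blocking=False):
            # Over capacity: shed the request cheaply rather than pile up work.
            self.send_error(503, "Overloaded, try again later")
            return
        try:
            body = b"search results would go here\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        finally:
            _slots.release()


if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 8080), SheddingHandler).serve_forever()
```

Shedding excess load automatically serves a purpose similar to what the responders did by hand at 15:21 and 15:23: keep each instance lightly loaded enough to do appreciable work instead of crashing under the full surge.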