Each day at our company, developers are required to document their activities, painstakingly jotting down their daily work and future plans. A monotonous chore that I just really dislike.
So now, there's a scribe for that:
John Belmonte, 2022-Sep
I've started writing a toy structured concurrency implementation for the Lua programming language. Some motivations:
So what is structured concurrency? For now, I'll just say that it's a programming paradigm that makes managing concurrency (arguably the hardest problem of computer science) an order of magnitude easier in many contexts. It achieves this in ways that seem subtle to us—clearly so, since its utility didn't reach critical mass until around 2018[^sc_birth] (just as control structures like functions, if, and while weren't introduced to languages until long after the first computers).
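The post's implementation is in Lua, but the shape of the idea can be sketched in Python (my illustration, not the author's code): child tasks are confined to a lexical scope that does not exit until every child has finished. `asyncio.gather` only approximates the full structured-concurrency guarantee (by default it does not cancel siblings when one fails), but it shows the scoping:

```python
import asyncio

async def child(name, delay, results):
    # A child task that records its name when it finishes.
    await asyncio.sleep(delay)
    results.append(name)

async def main():
    results = []
    # Both children live and die inside this scope: control does not
    # pass this await until every child has completed.
    await asyncio.gather(
        child("a", 0.01, results),
        child("b", 0.02, results),
    )
    # Here, both children are guaranteed to be done.
    return results

print(asyncio.run(main()))  # ['a', 'b']
```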
hack_datasus <- function(sistema, modalidade, tipo_arquivo, ano, UF, mes){
  # Builds a data frame from an FTP download on the DATASUS page
  # sistema: e.g. 'SIHSUS'. See the available systems at http://www2.datasus.gov.br/DATASUS/index.php?area=0901&item=1
  # modalidade: 'dados'
  # tipo_arquivo: e.g. 'RD' (varies by system)
  # ano: e.g. 17 (last two digits of the year)
  # UF: e.g. 'AL' (abbreviation of a Brazilian state)
  # mes: e.g. '12' (strings from '01' to '12')
Apache Kafka is a highly scalable publish-subscribe messaging system that can serve as the data backbone in distributed applications. With Kafka's producer-consumer model it becomes easy to implement multiple data consumers that do live monitoring as well as persistent data storage for later analysis. RabbitMQ is broadly similar to Kafka in this respect: both give you the option of point-to-point message delivery or broadcasting. The installation process for OS X is as follows.
The best way to install the latest version of the Kafka server on OS X and to keep it up to date is via Homebrew.
brew search kafka
brew install kafka
The above commands will install a dependency called ZooKeeper, which is required to run Kafka. Start the ZooKeeper service:
zkserver start
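With ZooKeeper up, the broker can be started and a test topic created. The commands below are a sketch based on the binaries a Homebrew Kafka install puts on the PATH; the exact `kafka-topics` flags depend on your Kafka version (newer releases use `--bootstrap-server localhost:9092` instead of `--zookeeper`):

```shell
# Start the Kafka broker (keep this running in its own terminal)
kafka-server-start /usr/local/etc/kafka/server.properties

# In another terminal: create a single-partition test topic
kafka-topics --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test
```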
class Success(object):
    def __init__(self, value):
        self.value = value

class Error(object):
    def __init__(self, value):
        self.value = value

class wrapper(object):
    def __init__(self, result):
        self.result = result
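A minimal sketch of how a Success/Error result pattern like this might be used; the `safe_divide` helper below is my own hypothetical example, not from the original:

```python
class Success:
    def __init__(self, value):
        self.value = value

class Error:
    def __init__(self, value):
        self.value = value

def safe_divide(a, b):
    # Return a wrapped result instead of raising, so callers can
    # branch on the outcome type rather than catch exceptions.
    if b == 0:
        return Error("division by zero")
    return Success(a / b)

print(safe_divide(10, 2).value)              # 5.0
print(type(safe_divide(1, 0)).__name__)      # Error
```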
let rows = {}

export default function(props = [], state = []) {
  return function(target) {
    const proto = Object.create(target.prototype)
    proto.shouldComponentUpdate = function(newProps, newState) {
      let id = (this._update_id = this._update_id || Math.random())
import asyncio
import random

q = asyncio.Queue()

async def producer(num):
    while True:
        await q.put(num + random.random())
        await asyncio.sleep(random.random())
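The fragment above ends at the producer. A self-contained sketch of the full producer/consumer pair might look like the following; the bounded loops and the consumer are my additions, not part of the original snippet:

```python
import asyncio
import random

async def producer(q, num, count):
    # Put `count` jittered values on the queue, pausing briefly between puts.
    for _ in range(count):
        await q.put(num + random.random())
        await asyncio.sleep(0.01)

async def consumer(q, total):
    # Pull exactly `total` items off the queue.
    items = []
    for _ in range(total):
        items.append(await q.get())
        q.task_done()
    return items

async def main():
    q = asyncio.Queue()
    _, items = await asyncio.gather(
        asyncio.gather(producer(q, 1, 3), producer(q, 10, 3)),
        consumer(q, 6),
    )
    return items

items = asyncio.run(main())
print(len(items))  # 6
```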
http://geekgirl.io/concurrent-http-requests-with-python3-and-asyncio/
Concurrent HTTP Requests with Python3 and asyncio
My friend, a data scientist, had whipped up a script that made lots (over 27K) of queries to the Google Places API. The problem was that it was synchronous and thus took over 2.5 hours to complete.
Given that I'm currently attending Hacker School and get to spend all day working on any coding problems that interest me, I decided to try to optimise it.
I'm new to Python, so I had to do a bit of groundwork first to determine which course of action was best.
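The rest of the post isn't reproduced here, but the core speedup can be sketched with asyncio alone. The network call is simulated with `asyncio.sleep`, and the function names are mine, not from the post:

```python
import asyncio
import time

async def fetch(i):
    # Stand-in for one HTTP request (latency simulated with sleep).
    await asyncio.sleep(0.1)
    return i

async def fetch_all(n):
    # Fire off all requests at once and wait for every result.
    return await asyncio.gather(*(fetch(i) for i in range(n)))

start = time.perf_counter()
results = asyncio.run(fetch_all(20))
elapsed = time.perf_counter() - start

print(len(results))  # 20
# elapsed is roughly one request's latency (~0.1s), not 20 * 0.1s,
# because the simulated requests ran concurrently.
```

The same `gather` pattern applied to a real HTTP client is what turns 27K sequential queries into batches that run in a fraction of the original 2.5 hours.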