- Core idea: Chow proposes an automata-theoretic testing method for software designs modeled as finite-state machines (FSMs). Tests are derived from the model itself, not from a prototype.
- Publication: IEEE Transactions on Software Engineering, Vol. SE-4(3), May 1978, pp. 178–187.
- Impact: Cited over 2,000 times; canonical in FSM-based testing and the origin of the W-method lineage.
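To make the core idea concrete, here is a rough Python sketch in the spirit of the W-method: derive a transition cover from the FSM model and concatenate it with a characterization set W. The toy Mealy machine and W set are illustrative only, not taken from the paper.

```python
from collections import deque

def transition_cover(fsm, start):
    """Input sequences that reach every state and exercise every defined transition."""
    access = {start: ()}               # shortest input sequence reaching each state
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (s, x), (t, _) in fsm.items():
            if s == state and t not in access:
                access[t] = access[s] + (x,)
                queue.append(t)
    cover = set(access.values())       # state cover (includes the empty sequence)
    for (s, x) in fsm:
        if s in access:
            cover.add(access[s] + (x,))  # extend each access sequence by one input
    return cover

def w_method_tests(fsm, start, w_set):
    """Concatenate every cover sequence with every distinguishing sequence in W."""
    return {p + w for p in transition_cover(fsm, start) for w in w_set}

# Toy Mealy machine: (state, input) -> (next_state, output).
# W = {('a',)} distinguishes states 0 and 1 because they emit different outputs on 'a'.
fsm = {(0, 'a'): (1, 'x'), (0, 'b'): (0, 'y'),
       (1, 'a'): (0, 'y'), (1, 'b'): (1, 'y')}
print(sorted(w_method_tests(fsm, start=0, w_set={('a',)})))
```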
This document lists blog posts and articles exploring the use of AI, NLP, and Hugging Face models in software quality assurance (QA) and software testing contexts.
Explores how NLP can help automatically generate user stories and test scenarios from unstructured inputs, reducing manual effort in QA.
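As an illustrative sketch (not code from any of the listed articles), a Hugging Face text2text model can draft candidate test scenarios from an unstructured requirement; the model choice and prompt below are assumptions, and the output still needs human review.

```python
# Draft test scenarios from a free-text requirement with a Hugging Face pipeline.
# Model and prompt wording are assumptions for illustration only.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

requirement = (
    "Users should be able to reset their password via an emailed link "
    "that expires after 30 minutes."
)
prompt = f"Write three test scenarios for this requirement: {requirement}"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```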
Iterative planning, code generation, and code review. This is my observation as to what works when developing code using a large language model (mostly Claude).
It is interesting to independently rediscover that the best approach to writing code is to work in rapid iterations on small, well-reviewed improvements, because large, detailed, and complicated prompts do not work well, or at least not reliably.
The academic literature calls this “prompt chaining,” and there are numerous documented cases where iterative prompting yields better results than so-called one-shot prompting with a single extensive, detailed prompt. There are also plenty of examples where chaining prompts across different models delivers strong results.
This aligns with what my team has found as we increasingly use models to write code. We began by trying to keep our work at the forefront.
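A minimal sketch of that loop (plan, generate, review, revise) is shown below; `call_model` is a placeholder for whatever LLM client you use, not a real API, and different steps can be routed to different models.

```python
# A minimal prompt-chaining sketch: each step's output is fed into the next prompt.
# `call_model` is a placeholder, not a real client; swap in your own LLM call.

def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your LLM client.")

def iterate_on_code(task: str, rounds: int = 2) -> str:
    plan = call_model(f"Outline a small, step-by-step plan for this coding task:\n{task}")
    code = call_model(f"Write the code for the first step of this plan:\n{plan}")
    for _ in range(rounds):
        review = call_model(f"Review this code and list concrete improvements:\n{code}")
        code = call_model(f"Revise the code to address these review comments:\n{review}\n\nCode:\n{code}")
    return code
```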
This document consolidates highly cited foundational papers and their citing works relevant to cross-model prompt chaining across different LLM families (e.g., GPT, Claude, Qwen). Each entry includes a link to its source.
- [AI Chains (CHI’22)][ai-chains] — formalizes prompt chaining; tooling makes swapping steps/models straightforward.
- [Prompt Chaining vs Stepwise (Findings ACL’24)][prompt-stepwise] — chaining empirically outperforms single long prompts; supports staged flows that can be mapped onto different models.
According to historical documentation, LangChain began as an open-source project launched in October 2022 by Harrison Chase.
Early commit records show that the first commit to LangChain was essentially a light wrapper around Python's formatter.format.
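For illustration only (this is not the actual LangChain code), such a wrapper might look like a prompt template whose format method delegates to Python's string.Formatter:

```python
# Hypothetical sketch of a prompt template that thinly wraps Formatter().format.
from string import Formatter

class Prompt:
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        # Fill named placeholders in the template with keyword arguments.
        return Formatter().format(self.template, **kwargs)

prompt = Prompt("Answer the question as concisely as possible.\n\nQuestion: {question}")
print(prompt.format(question="What is prompt chaining?"))
```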
```elisp
(defadvice hippie-expand (around hippie-expand-case-fold)
  "Try to do case-sensitive matching (not effective with all functions)."
  (let ((case-fold-search nil))
    ad-do-it))
(ad-activate 'hippie-expand)

(defun arcane/hippie-expand-completions (&optional hippie-expand-function)
  "Use menus to show the possible hippie-expand completions
instead of making me guess. Because I got REALLY sick of
pounding on M-/
```
This document summarizes validation practices and evaluation approaches applied to AI models in the financial sector before deployment, focusing on the training, testing, and pre-release stages.
| Validation / Pre-release Activity | What's Done / Measured | Why It Matters in Finance Context |
|---|---|---|
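As a hedged example of the kind of pre-release check such a table might describe, the sketch below gates a hypothetical binary credit-risk classifier on discrimination (AUC) and calibration (Brier score) over a held-out set; the metric choice, thresholds, and scikit-learn usage are assumptions for illustration, not requirements from any regulator.

```python
# Illustrative pre-release validation gate for a hypothetical binary classifier.
from sklearn.metrics import roc_auc_score, brier_score_loss

def validate_for_release(y_true, y_score, min_auc=0.75, max_brier=0.20):
    """Return (passed, metrics) for a simple discrimination + calibration check."""
    metrics = {
        "auc": roc_auc_score(y_true, y_score),       # discrimination on held-out data
        "brier": brier_score_loss(y_true, y_score),  # calibration of predicted probabilities
    }
    passed = metrics["auc"] >= min_auc and metrics["brier"] <= max_brier
    return passed, metrics

# Toy example with made-up labels and scores:
ok, m = validate_for_release([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(ok, m)
```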
```sh
sudo snap install languagetool
sudo apt -y install ack \
  bat \
  batcat \
  chktex \
  cowsay \
  emacs \
  entr \
  fzf \
```
```sh
# Bump the patch version of the newest git tag, assuming version tags are formatted like v0.0.0
git tag v$(( $(git tag | sort -V | tail -n1 | cut -d. -f1 | tr -d v) )).$(( $(git tag | sort -V | tail -n1 | cut -d. -f2) )).$(( $(git tag | sort -V | tail -n1 | cut -d. -f3) + 1 )) -m 'Bump patch version, e.g. from v1.2.3 to v1.2.4'
```