Blog

How to Create Vector Embeddings in Python
When you’re building a retrieval-augmented generation (RAG) app, the first thing you need to do is prepare your data. You need to split your unstructured data into chunks, turn those chunks into vector embeddings, and store the embeddings in a vector database.

Troubles with multipart form data and fetch in Node.js
This is one of those cathartic blog posts, one in which I spent several frustrating hours trying to debug something that really should have just worked. Once I had finally found out what was going on, I felt I had to write it all down just in case someone else is out there dealing with the same issue. So if you have found yourself in a situation where using fetch in Node.js for a multipart/form-data request doesn’t work, this might help you out.
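For context, this is the shape of a multipart upload that works out of the box with Node's built-in fetch, FormData and Blob (Node.js 18+). It's a minimal sketch with a made-up URL and file, not the specific failure case the post digs into:

```javascript
import { readFile } from "node:fs/promises";

// Node 18+ ships fetch, FormData and Blob globally (via undici)
const form = new FormData();
form.append("description", "My upload");
form.append(
  "file",
  new Blob([await readFile("./photo.jpg")], { type: "image/jpeg" }),
  "photo.jpg"
);

// fetch sets the multipart boundary for you, so don't set Content-Type yourself
const response = await fetch("https://example.com/upload", {
  method: "POST",
  body: form,
});
console.log(response.status);
```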

Clean up HTML Content for Retrieval-Augmented Generation with Readability.js
Scraping web pages is one way to fetch content for your retrieval-augmented generation (RAG) application. But parsing the content from a web page can be a pain.
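As a minimal sketch of the approach, assuming the @mozilla/readability and jsdom packages and an example article URL:

```javascript
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";

// Fetch the page and parse it into a DOM that Readability can work with
const response = await fetch("https://example.com/article");
const html = await response.text();
const dom = new JSDOM(html, { url: "https://example.com/article" });

// Readability strips navigation, ads and other clutter, leaving the article
const article = new Readability(dom.window.document).parse();
console.log(article.title);
console.log(article.textContent); // clean text, ready for chunking and embedding
```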

Shallow clones versus structured clones
Have you ever had one of those times when you think you’re doing everything right, yet still you get an unexpected bug in your application? Particularly when it is state-related and you thought you did everything you could to isolate the state by making copies instead of mutating it in place.
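Here's a quick illustration of the difference, using the built-in structuredClone (available in modern browsers and Node.js 17+):

```javascript
const state = { user: { name: "Alice" } };

// A spread only copies the top level: nested objects are still shared
const shallowCopy = { ...state };
shallowCopy.user.name = "Bob";
console.log(state.user.name); // "Bob" - the original was mutated too

// structuredClone copies the whole object graph
const deepCopy = structuredClone(state);
deepCopy.user.name = "Carol";
console.log(state.user.name); // still "Bob" - the original is untouched
```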

How to Create Vector Embeddings in Node.js
When you’re building a retrieval-augmented generation (RAG) app, job number one is preparing your data. You’ll need to take your unstructured data and split it up into chunks, turn those chunks into vector embeddings, and finally, store the embeddings in a vector database.
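As a taste of the embedding step, here's a minimal sketch assuming you're using OpenAI's embeddings API via the openai package; the post's own choice of provider and vector database may differ:

```javascript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Chunks produced by your text splitter
const chunks = ["First chunk of text...", "Second chunk of text..."];

// One API call can embed a whole batch of chunks
const response = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: chunks,
});

// Each embedding is an array of floats, ready to store in a vector database
const vectors = response.data.map((item) => item.embedding);
console.log(vectors[0].length); // e.g. 1536 dimensions
```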

How to Chunk Text in JavaScript for Your RAG Application
Retrieval-augmented generation (RAG) applications begin with data, so getting your data in the right shape to work well with vector databases and large language models (LLMs) is the first challenge you’re likely to face when you get started building. In this post, we’ll discuss the different ways to work with text data in JavaScript, exploring how to split it up into chunks and prepare it for use in a RAG app.
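To give a flavour of the simplest option, here's a naive fixed-size chunker with overlap in plain JavaScript; the post covers more robust strategies, such as splitting on sentence and paragraph boundaries:

```javascript
// Split text into chunks of roughly `chunkSize` characters,
// repeating `overlap` characters between neighbouring chunks for context
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap;
  }
  return chunks;
}

const chunks = chunkText("A long document goes here..." /* your document text */);
console.log(chunks.length);
```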

How Using Fetch with the Streams API Gets You Faster UX with GenAI Apps
Generative AI enables us to build incredible new types of applications, but large language model (LLM) responses can be slow. If we wait for the full response before updating the user interface, we might be making our users wait longer than they need to. Thankfully, most LLM APIs, including OpenAI, Anthropic, and Langflow, provide streaming endpoints that you can use to stream responses as they are generated. In this post, we’re going to see how to use JavaScript’s fetch API to update your front-end application immediately as an LLM generates output and create a better user experience.
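Here's roughly what that looks like in the browser; the /api/chat endpoint and #output element are placeholders for your own app:

```javascript
const response = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Tell me a story" }),
});

// response.body is a ReadableStream: read it chunk by chunk as the LLM generates
const reader = response.body.getReader();
const decoder = new TextDecoder();
const output = document.querySelector("#output");

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Append each decoded chunk to the page as soon as it arrives
  output.textContent += decoder.decode(value, { stream: true });
}
```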

JavaScript is getting array grouping methods
Grouping items in an array is one of those things you’ve probably done a load of times. Each time you would have written a grouping function by hand or perhaps reached for lodash’s groupBy function.
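The methods eventually shipped as Object.groupBy and Map.groupBy (the post may describe an earlier shape of the proposal), and in recent JavaScript runtimes they work like this:

```javascript
const posts = [
  { title: "Streaming LLM responses", category: "ai" },
  { title: "Array grouping methods", category: "javascript" },
  { title: "Chunking text for RAG", category: "ai" },
];

// Object.groupBy returns an object keyed by whatever the callback returns
const byCategory = Object.groupBy(posts, (post) => post.category);
console.log(byCategory.ai.length); // 2

// Map.groupBy does the same but returns a Map, handy for non-string keys
const byCategoryMap = Map.groupBy(posts, (post) => post.category);
console.log(byCategoryMap.get("javascript").length); // 1
```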

Node.js includes built-in support for .env files
With the recent release of version 20.6.0, Node.js now has built-in support for .env files. You can now load environment variables from a .env file into process.env in your Node.js application, completely dependency-free.
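In practice it looks like this, assuming a .env file next to your script containing a line such as API_KEY=super-secret:

```javascript
// index.js
// Run with: node --env-file=.env index.js  (no dotenv package required)
console.log(process.env.API_KEY); // "super-secret"
```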

Easy and accessible pagination links for your Astro collections
Generating pagination links is not as straightforward as it may seem. So, while rebuilding my own site with Astro, I released a
<Pagination /> component on npm as @philnash/astro-pagination that anyone can use in their Astro site. Read on to find out more.
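If you haven't paginated a collection in Astro before, the component builds on the paginate helper Astro passes to getStaticPaths. Here's a minimal sketch of that part; the exact props the <Pagination /> component takes are covered in the post and the package README:

```javascript
// Frontmatter of src/pages/blog/[page].astro
import { getCollection } from "astro:content";

export async function getStaticPaths({ paginate }) {
  const posts = await getCollection("blog");
  // Generates /blog/1, /blog/2, ... with 10 posts per page
  return paginate(posts, { pageSize: 10 });
}

// Each generated page receives a `page` prop with data, currentPage,
// lastPage and url.prev / url.next for building the links
const { page } = Astro.props;
```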
Subscribe
To keep up with posts on this blog, you can subscribe via RSS or follow me on DEV.