Getting Started with Amazon Bedrock & Knowledge Bases – A Simple Way to Make Your Documents Chat-Ready

In the world of AI, there’s often a huge gap between theory and practice. You’ve got powerful models like Claude 4, Amazon Titan, or even GPT-4, but how do you actually use them to solve a real problem? That’s where Amazon Bedrock and its Knowledge Bases come in.

We recently tried this out ourselves—and the experience was surprisingly smooth.

Upload Documents, Sit Back

If you’ve ever worked with RAG (retrieval-augmented generation) pipelines, you know how much work is involved in setting up data preprocessing, embedding, vector storage, and retrieval logic.

With Amazon Bedrock Knowledge Bases, that heavy lifting is done for you. You just upload your documents (PDFs, Word files, etc.) to an S3 bucket, connect that bucket to Bedrock, and that’s it. Behind the scenes, Amazon handles:
- Splitting the documents into chunks
- Creating vector embeddings
- Storing them in a vector database
- Making everything queryable with natural language
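If you would rather script the upload-and-sync step than click through the console, a minimal boto3 sketch could look like the following. The bucket name, file name, Knowledge Base ID, and data source ID are placeholders you would swap for the values from your own setup.

```python
import boto3

# Upload a document to the S3 bucket that backs the Knowledge Base.
# Bucket, file name, and IDs below are placeholders for illustration.
s3 = boto3.client("s3")
s3.upload_file("employee-handbook.pdf", "my-kb-documents-bucket", "docs/employee-handbook.pdf")

# Kick off an ingestion job: chunking, embedding, and indexing into the
# vector store all happen inside this managed job.
bedrock_agent = boto3.client("bedrock-agent")
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId="KB1234567890",  # placeholder Knowledge Base ID
    dataSourceId="DS1234567890",     # placeholder data source ID
)
print(job["ingestionJob"]["status"])
```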

It’s like magic—except it’s real and running in your AWS account.

Test the Latest Models in Minutes

Once your documents are in the Knowledge Base, you can immediately start chatting with them. The AWS Console gives you a simple chat interface, so you can test different prompts and model behaviors without writing a single line of code. You can switch between models like:
- Claude 3 or 4 from Anthropic
- Titan Text Premier and the multilingual Nova models from Amazon
- Or even third-party models from Cohere and Meta
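The console chat is the fastest way to experiment, but the same retrieval-plus-generation flow is also exposed through the RetrieveAndGenerate API. Below is a minimal boto3 sketch; the Knowledge Base ID and model ARN are placeholder values you would replace with your own.

```python
import boto3

# Query the Knowledge Base and let a foundation model answer from the
# retrieved chunks. The ID and model ARN are placeholders.
runtime = boto3.client("bedrock-agent-runtime")

response = runtime.retrieve_and_generate(
    input={"text": "What is our policy on remote work?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder
            # Swap this ARN to try a different model (Claude, Titan, Nova, ...).
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```

The response also includes citations pointing back to the retrieved source chunks, which is handy when you want to check where an answer actually came from.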

It’s a playground for prompt engineers and curious minds.

Try Claude 4, Amazon Nova, and More

The ability to try state-of-the-art models - especially ones like Claude 4 or Amazon’s multilingual offerings - without needing separate API keys or custom infrastructure is a huge win. You just select the model from a dropdown and start asking questions.

Amazon’s Nova model in particular shows promising results with non-English content (we tested it with Hungarian documents!), making it a solid choice for multilingual document handling.
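If you want to compare models outside the Knowledge Base flow, the Bedrock Converse API lets you send the same prompt to several models from one script. Here is a rough sketch; the model IDs are examples, and which ones you can invoke on demand depends on your region and account access (some newer models are called through a region-prefixed inference profile ID instead).

```python
import boto3

# Send the same prompt to a couple of foundation models for a quick comparison.
# Model IDs are examples; availability varies by region and account access.
bedrock_runtime = boto3.client("bedrock-runtime")

for model_id in [
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "amazon.nova-lite-v1:0",
]:
    reply = bedrock_runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": "Summarize our vacation policy in two sentences."}]}],
    )
    print(model_id, "->", reply["output"]["message"]["content"][0]["text"])
```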

Built-in Chat Assistant

For many use cases, you don’t even need to build your own frontend. Bedrock provides a built-in assistant interface directly in the console, letting you interact with your documents via chat.

It’s a great way to demo ideas or validate the usefulness of your content setup before integrating it into apps or workflows.

⚠ One Caveat: Watch Out for OpenSearch Costs

There’s one thing to keep in mind: by default, Bedrock stores embeddings in OpenSearch Serverless, which can become expensive quickly - especially for large document sets or frequent queries.

We’ll cover how to set up your own custom vector store (with OpenSearch, Qdrant, or others) in a future post. This gives you more control over cost and performance.

Final Thoughts

Amazon Bedrock and its Knowledge Base feature make it incredibly easy to build document-aware assistants - whether for HR documents, support manuals, or internal policies.

You don’t need a PhD or a full MLOps team to get started. And you don’t need to deal with infrastructure nightmares, either.

This is the kind of tech we love at kocka.news: practical, powerful, and surprisingly simple to try.

Stay tuned for the next post, where we’ll explore cost-optimized setups with a custom OpenSearch backend! 
