
Quarkus AI Text Processor with LangChain4j & Groq

LangChain4j Basics · 30 min read
What you'll learn: Build a high-performance Quarkus REST API that uses LangChain4j and Groq's Llama 3.3 model for dynamic text processing — summarization, translation, and more — via a single endpoint.

Prerequisites

  1. Java 17+ installed.
  2. Groq API Key — obtain one at console.groq.com.

Steps

  1. Clone the repository:
    git clone https://github.com/mehedicoder/langchain4j-quarkus.git
    cd langchain4j-quarkus
  2. Set your Groq API key as an environment variable:
    # Linux/macOS
    export GROQ_API_KEY=your_gsk_api_key_here
    
    # Windows (PowerShell)
    $env:GROQ_API_KEY="your_gsk_api_key_here"
  3. Review the tech stack: The project uses Quarkus 3.15.1, LangChain4j (Quarkus Extension), and the llama-3.3-70b-versatile model via Groq's OpenAI-compatible API.
  4. Understand the architecture: AI services are declared via Java interfaces using @RegisterAiService — no boilerplate prompt handling required.
  5. Build and run:
    ./mvnw quarkus:dev
    The app starts on Quarkus's default port, 8080 (http://localhost:8080).
  6. Send a request to the REST endpoint:
    curl "http://localhost:8080/api/process?task=summarize&text=Your+text+here"
    Change the task query parameter to switch between summarization, translation, and other AI tasks.
  7. Run tests:
    ./mvnw test
    Tests use REST-Assured and JUnit 5.
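The task query parameter in step 6 implies a dispatch from task name to prompt template before the text reaches the model. A minimal plain-Java sketch of that idea — the names and templates here are hypothetical and independent of the project's actual code:

```java
import java.util.Map;

public class PromptDispatch {
    // Hypothetical task -> prompt-template table; "%s" is replaced by the user's text.
    static final Map<String, String> TEMPLATES = Map.of(
        "summarize", "Summarize the following text in three sentences:%n%s",
        "translate", "Translate the following text to English:%n%s"
    );

    // Build the final prompt for a task, rejecting unknown task names.
    static String buildPrompt(String task, String text) {
        String template = TEMPLATES.get(task);
        if (template == null) {
            throw new IllegalArgumentException("Unknown task: " + task);
        }
        return String.format(template, text);
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("summarize", "Quarkus is a Kubernetes-native Java stack."));
    }
}
```

In the real project this dispatch is hidden behind a @RegisterAiService interface, so the REST resource only selects which AI method to call.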

Next steps: Explore the @RegisterAiService interface to understand declarative AI services in LangChain4j. Try swapping the Groq model for another provider by adjusting the LangChain4j configuration.

View source on GitHub →

React Frontend with LangChain4j AI Backend

LangChain4j Basics · 25 min read
What you'll learn: Set up a React frontend connected to a LangChain4j-powered Java backend with streaming AI responses for cybersecurity consulting and document intelligence.

Prerequisites

  1. Java 17+ (uses Text Blocks and Records).
  2. Gradle 7+ or Maven as build tool.
  3. Groq API Key set as GROQ_API_KEY environment variable.

Steps

  1. Clone the repository:
    git clone https://github.com/mehedicoder/react_langchain4j.git
    cd react_langchain4j
  2. Configure the Groq API key:
    # Linux/macOS
    export GROQ_API_KEY="your_gsk_key_here"
    
    # Windows (PowerShell)
    $env:GROQ_API_KEY="your_gsk_key_here"
    
    # Windows (CMD)
    set GROQ_API_KEY=your_gsk_key_here
  3. Explore the three applications included:
    • ITAssistant — Terminal-based AI security consultant with streaming and chat memory.
    • ITAssistantHumanLike — Same as above with simulated human typing (Teletype effect).
    • TextSummarizer — Multithreaded text summarizer supporting 10+ summarization levels.
  4. Place text files for summarization: Put .txt files in src/main/resources/ before running the TextSummarizer.
  5. Build and run with Gradle:
    ./gradlew run
    Or run individual classes directly from your IDE.
  6. Interact with the chatbots: Type questions at the Ask> prompt. After each response the system displays TTFT (Time to First Token) and Total Latency metrics. Type exit or quit to end.
  7. Use the Text Summarizer: Run TextSummarizer.java, then follow the prompts — enter filename, summarization level (e.g., executive), and language.
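The Teletype effect in ITAssistantHumanLike (step 3) boils down to printing a response one character at a time with a small delay. The repository reportedly drives this via CompletableFuture while streaming; this standalone sketch shows only the per-character pacing, with hypothetical names:

```java
public class Teletype {
    // Print text one character at a time, pausing delayMillis between
    // characters to simulate human typing; returns the character count.
    static int type(String text, long delayMillis) throws InterruptedException {
        for (char c : text.toCharArray()) {
            System.out.print(c);
            System.out.flush(); // make each character visible immediately
            Thread.sleep(delayMillis);
        }
        System.out.println();
        return text.length();
    }

    public static void main(String[] args) throws InterruptedException {
        type("Rotate your credentials regularly.", 20);
    }
}
```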

Summary Level Reference

Level       | Goal          | Output Style
------------|---------------|-----------------------------
Extractive  | Accuracy      | Verbatim sentences
Executive   | Action        | Decision-oriented bullets
Thematic    | Understanding | Core concepts & motifs
Analytical  | Insight       | Interprets "why" and "how"
Ultra-brief | Speed         | Single high-impact sentence
Note: The repository README does not detail the React frontend integration steps. Check the source code and build.gradle for the React build configuration.

Next steps: Review the build.gradle for React integration details. Explore the @SystemMessage annotations to customize the AI persona.

View source on GitHub →

AI-Powered Invoice Verification with LangGraph4j

Architecture · 20 min read
What you'll learn: Build an automated invoice processing system using langgraph4j with a hybrid "Human-in-the-Loop" architecture — low-value invoices auto-approved, high-value routed for manual review.

Prerequisites

  1. Java 17+ and Maven.
  2. langgraph4j dependency (included in pom.xml).

Steps

  1. Clone the repository:
    git clone https://github.com/mehedicoder/invoice-verifia.git
    cd invoice-verifia
  2. Review the Maven configuration: Open pom.xml to understand dependencies including langgraph4j and any LLM provider connectors.
  3. Understand the architecture: The system implements a multi-agent workflow:
    • Extractor — Reads invoice data and detects amounts.
    • Router — Evaluates invoice value thresholds; low-value invoices go to the autonomous agent, high-value invoices trigger human authorization.
    • Persistence — Logs all approvals (automatic and manual).
    • Responder — Finalizes and outputs invoice status.
  4. Build the project:
    ./mvnw clean install
  5. Run the application:
    ./mvnw exec:java
    Observe the streaming output as invoices are processed.
  6. Interact with the Human-in-the-Loop: For high-value invoices you'll see a prompt:
    Approve [A] or Reject [R]?
    Enter A to approve or R to reject. The system logs manual authorization.
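The Router step above amounts to a value-threshold branch. A minimal sketch of that decision with a hypothetical $100 cutoff (the repository's actual threshold may differ):

```java
public class InvoiceRouter {
    enum Route { AUTO_APPROVE, HUMAN_REVIEW }

    static final double THRESHOLD = 100.00; // hypothetical cutoff

    // Low-value invoices go to the autonomous agent; high-value ones
    // are routed to a human for authorization.
    static Route route(double amount) {
        return amount < THRESHOLD ? Route.AUTO_APPROVE : Route.HUMAN_REVIEW;
    }

    public static void main(String[] args) {
        System.out.println("$45.00  -> " + route(45.00));
        System.out.println("$150.00 -> " + route(150.00));
    }
}
```

In the LangGraph4j version this branch is a conditional edge in the agent graph rather than a plain method, but the routing predicate is the same.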

Example Output

[Extractor] Detected: $45.00
[Persistence] Logged APPROVED by Agent
[Responder] Finalized Invoice: Status=APPROVED, Total=$45.00

[Extractor] Detected: $150.00
Approve [A] or Reject [R]? A
[Persistence] MANUAL APPROVAL LOGGED: $150.00 authorized by Human.
Note: The README does not include explicit environment variable or LLM API key configuration. Check pom.xml and source code for the specific LLM provider setup.

Next steps: Customize the invoice value threshold for routing. Extend the agent graph to add OCR integration or email notification steps.

View source on GitHub →

LangChain4j AI Playground — Chatbots & Text Summarizer

LangChain4j Basics · 25 min read
What you'll learn: Run three LLM-powered CLI tools — an IT security chatbot with streaming, a human-like assistant with teletype effect, and a multithreaded text summarizer — all using LangChain4j and Groq's Llama 3.

Prerequisites

  1. Java 17+ (uses Text Blocks and Records).
  2. Gradle 7+ or Maven.
  3. Groq API Key set as GROQ_API_KEY.

Steps

  1. Clone the repository:
    git clone https://github.com/mehedicoder/lanchain4j_playground.git
    cd lanchain4j_playground
  2. Set the Groq API key:
    export GROQ_API_KEY="your_gsk_key_here"
  3. Prepare text files (for summarizer): Place .txt files in src/main/resources/.
  4. Run the IT Guru Chatbot:
    java org.assistant.ITAssistant
    Type security questions at Ask>. The chatbot retains 20 messages of context and streams responses. Metrics (TTFT, Total Latency) appear after each answer.
  5. Run the Human-Like Assistant:
    java org.assistant.ITAssistantHumanLike
    Same as above but responses appear with simulated human typing speed via CompletableFuture.
  6. Run the Text Summarizer:
    java org.assistant.TextSummarizer
    Follow the interactive prompts: enter filename, summarization level (e.g., executive), and output language. Multiple files are processed concurrently using a fixed thread pool (Executors.newFixedThreadPool).
  7. Type exit or quit to end any chatbot session.
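The 20-message context mentioned in step 4 can be modeled as a bounded queue that evicts the oldest message once the window is full. LangChain4j ships this as MessageWindowChatMemory; the plain-Java sketch below only illustrates the idea:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class BoundedChatMemory {
    private final int capacity;
    private final Deque<String> messages = new ArrayDeque<>();

    BoundedChatMemory(int capacity) {
        this.capacity = capacity;
    }

    // Add a message, evicting the oldest one when the window is full.
    void add(String message) {
        if (messages.size() == capacity) {
            messages.removeFirst();
        }
        messages.addLast(message);
    }

    List<String> window() {
        return List.copyOf(messages);
    }

    public static void main(String[] args) {
        BoundedChatMemory memory = new BoundedChatMemory(20);
        for (int i = 1; i <= 25; i++) {
            memory.add("message " + i);
        }
        // prints: 20 messages, oldest: message 6
        System.out.println(memory.window().size() + " messages, oldest: " + memory.window().get(0));
    }
}
```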

Technical Architecture

Next steps: Create your own AI agent by defining a new interface with @RegisterAiService and @SystemMessage. Experiment with different Groq models for varied latency/cost tradeoffs.

View source on GitHub →

Document Intelligence — Local RAG Knowledge Base

Architecture · 45 min read
What you'll learn: Build a 100% private, local-first AI document assistant using Retrieval-Augmented Generation (RAG) with Ollama embeddings and Groq inference — no cloud data uploads required.

Prerequisites

  1. Java 21+ (required for Virtual Threads / Project Loom).
  2. Ollama installed and running on localhost:11434.
  3. Groq API Key set as GROQ_API_KEY environment variable.

Steps

  1. Install and start Ollama: Download from ollama.ai and ensure it's running on the default port 11434.
  2. Pull the embedding model:
    ollama pull nomic-embed-text
  3. Clone the repository:
    git clone https://github.com/mehedicoder/document-intelligence.git
    cd document-intelligence
  4. Set the Groq API key:
    export GROQ_API_KEY="your_gsk_key_here"
  5. Prepare your documents: Place files in a local folder. Supported formats:
    • Documents: .pdf, .docx
    • Technical: .md, .markdown, .txt
    • Data: .csv, .json
  6. Build and run:
    ./mvnw clean install
    ./mvnw exec:java
    The application indexes your documents using Ollama's nomic-embed-text embeddings — all locally, no cloud uploads.
  7. Ask questions: The CLI provides a "thinking" indicator followed by streaming answers. Every response includes a source citation (e.g., (Source: roadmap_2026.pdf)).
  8. Ask follow-up questions: The system maintains conversation context, so you can refine queries without repeating information.
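Under the hood, RAG retrieval scores every indexed chunk's embedding against the query embedding and keeps the best matches; in this project the real embeddings come from Ollama's nomic-embed-text. A toy sketch of the scoring step with hand-made 3-dimensional vectors (real embeddings have hundreds of dimensions, and chunk names here are hypothetical):

```java
import java.util.Map;

public class ToyRetrieval {
    // Cosine similarity between two equal-length vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the chunk whose embedding is most similar to the query embedding.
    static String bestMatch(double[] query, Map<String, double[]> index) {
        String best = null;
        double bestScore = -1;
        for (Map.Entry<String, double[]> e : index.entrySet()) {
            double score = cosine(query, e.getValue());
            if (score > bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical chunks with toy embeddings.
        Map<String, double[]> index = Map.of(
            "roadmap_2026.pdf#p1", new double[]{0.9, 0.1, 0.0},
            "notes.md#intro",      new double[]{0.1, 0.9, 0.2}
        );
        double[] query = {0.8, 0.2, 0.1};
        // prints: Best chunk: roadmap_2026.pdf#p1
        System.out.println("Best chunk: " + bestMatch(query, index));
    }
}
```

The retrieved chunk's filename is what powers the source citations shown in step 7.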

Key Features

Next steps: Add more document types by extending the file parser. Experiment with different Ollama embedding models for improved retrieval quality.

View source on GitHub →