Dev Zone

Using custom AI models with Ollama and OutSystems

Rodrigo Coutinho

AI is transforming how applications interact with users, providing powerful features like summarization, sentiment analysis, and automation. However, integrating AI into an application often raises concerns about data privacy and compliance, especially when public models are involved. The solution? Deploying a custom AI model tailored to your specific needs.

In this post, I’ll walk through the process of integrating a custom AI model into an OutSystems application using Ollama. I’ll show how I set up the model, exposed it to the internet, created a proxy for seamless integration, and ultimately built an application that provides AI-powered summarization.

Step 1: Setting up a custom model with Ollama

My first step was setting up and configuring Ollama to run my custom AI model locally.

1. Install Ollama: If you’re using macOS, run:

brew install ollama

2. Pull the model: For testing, I downloaded a small model. In this case, it was Llama 3.2:

ollama pull llama3.2

3. Start the Ollama server:

ollama serve

This started a local server where I could send requests to the model.
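With the server running, Ollama accepts JSON over HTTP on port 11434. As a rough sketch, a summarization request to the /api/generate endpoint looks like this (the prompt is an invented example):

```python
import json

# Illustrative body for a POST to http://localhost:11434/api/generate;
# the model name matches the one pulled above, the prompt is made up.
body = {
    "model": "llama3.2",
    "prompt": "Summarize: The team agreed to ship the release on Friday.",
}

print(json.dumps(body))
```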

Step 2: Exposing the model to the internet

I used Cloudflare to create a secure tunnel that allows an external application to access my locally hosted model.

1. Install cloudflared, the Cloudflare Tunnel CLI:

brew install cloudflared

2. Create a tunnel:

cloudflared tunnel --url http://localhost:11434

This exposed the local Ollama server to a public-facing URL.

Step 3: Creating a proxy for AI agent integration

OutSystems AI Agent Builder expects a specific API format, which may not align with Ollama’s API structure. To bridge this gap, I built a lightweight proxy application in OutSystems:

  1. Define input and output structures: I used the OutSystems JSON import tool to create the correct data structure.
  2. Expose a REST API: The proxy exposed an endpoint (/chat/completions) that formats requests correctly before forwarding them to Ollama.
  3. Consume the Ollama API: I configured OutSystems to call the model via the Cloudflare tunnel, ensuring proper request formatting.
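At its core, the proxy is just a format translation. The actual proxy is built visually in OutSystems, so the following Python is only an illustration of the mapping, assuming an OpenAI-style /chat/completions schema on one side and Ollama's /api/chat schema on the other:

```python
import time
import uuid


def to_ollama(chat_request: dict) -> dict:
    """Map an OpenAI-style /chat/completions body to Ollama's /api/chat body."""
    return {
        "model": chat_request["model"],
        "messages": chat_request["messages"],
        "stream": False,  # ask for one complete reply, not a stream of chunks
    }


def to_openai(ollama_response: dict, model: str) -> dict:
    """Wrap Ollama's /api/chat reply in an OpenAI-style response envelope."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": ollama_response["message"],  # {"role": ..., "content": ...}
            "finish_reason": "stop",
        }],
    }


# Example round trip with a canned Ollama reply:
request = {"model": "llama3.2",
           "messages": [{"role": "user", "content": "Summarize this."}]}
ollama_reply = {"message": {"role": "assistant", "content": "A short summary."},
                "done": True}
response = to_openai(ollama_reply, request["model"])
```

The caller only ever sees the OpenAI-style envelope, which is what AI Agent Builder expects.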

Step 4: Building the AI Summarization Agent

With my proxy in place, I created an AI agent in OutSystems AI Agent Builder:

  1. Define the AI agent: I configured it to use my custom model for summarization.
  2. Provide example inputs: I supplied examples to help the model understand the expected input format and output structure.
  3. Test the agent: I provided a sample transcript and validated that the model correctly summarized the content.
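For context, a request the agent sends through the proxy looks roughly like this. The system prompt and transcript below are invented examples, not the exact ones used in the project:

```python
# Hypothetical /chat/completions body produced by the summarization agent.
example_request = {
    "model": "llama3.2",
    "messages": [
        {"role": "system",
         "content": "You are a meeting assistant. Summarize the transcript "
                    "and list the action items."},
        {"role": "user",
         "content": "Transcript: Alice proposed moving the launch to Friday; "
                    "Bob agreed to update the release notes."},
    ],
}
```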

Step 5: Creating the meeting tracker application

To demonstrate how my agent can be used in the real world, I developed an application in OutSystems Mentor to manage meeting transcripts and summaries.

  1. Generate a meeting tracker app: Using OutSystems Mentor, I created an app with meeting details, transcripts, and summary fields.
  2. Enhance the UI: I modified the transcript field for better input support and added a “Summarize” button.
  3. Connect to AI agent: I configured the button to call my AI summarization agent when clicked.
  4. Handle responses: I ensured the AI-generated summary was stored and displayed in the application.

Step 6: Debugging and optimization

While testing, I encountered issues such as:

  • 403 Errors: I fixed these by setting the correct permissions for the Ollama API.
  • Timeouts: To address this issue, I set the client-server timeout to 30 seconds to accommodate processing delays.
  • Streaming issues: I ensured that responses were sent in a single chunk instead of streamed word-by-word.
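The streaming fix comes down to a single flag in the request body. A minimal sketch, assuming Ollama's /api/chat endpoint and Python's standard library on the client side:

```python
import json
import urllib.request

# URL is illustrative; in this project it would be the Cloudflare tunnel URL.
OLLAMA_URL = "http://localhost:11434/api/chat"

payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Summarize this transcript."}],
    "stream": False,  # one complete JSON reply instead of word-by-word chunks
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# When actually sending: urllib.request.urlopen(req, timeout=30) caps the
# wait at 30 seconds, matching the client-server timeout described above.
```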

The final result: A running meeting tracker!

By following these steps, I successfully integrated a custom AI model into an OutSystems application. The final result? A running meeting tracker that allows users to input a transcript and generate a concise summary with action items—all powered by a locally hosted AI model.

This project highlights the power of combining AI and low-code development. Whether you're working with sensitive data or need a specialized AI feature, deploying a custom model gives you full control over performance, privacy, and compliance.

Stay tuned for more exciting OutSystems AI projects!

Want to try it yourself?

Watch the video with all the project details, and let us know how it works for you! If you have any questions or improvements, drop them in the OutSystems Community.