Text generation web UI - Deep Reason Extension

by oobabooga · $19 · 7 ratings

May 30, 2025 update: The extension now analyzes attached PDFs and text files! It also produces longer, higher-quality reasoning that better matches DeepSeek-R1's writing style.



An extension for text-generation-webui, inspired by OpenAI's o1 model, that makes LLMs analyze your input in detail before responding, with the goal of improving response quality.

Like DeepSeek's R1 model, it adds a "thinking step" to any LLM you use.

Compatibility

  • Any model you already use
  • All backends (llama.cpp, Transformers, ExLlamaV2, ExLlamaV3)
  • Both instruct and chat-instruct modes
  • The portable versions of the project for Windows/Linux/macOS

How it works

The extension automatically activates for every message sent in the Chat tab, with its own settings menu for customization. It also works through the API via the /chat/completions endpoint.
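
If you use the API, a request like the sketch below will trigger Deep Reason the same way the Chat tab does. This is a minimal sketch assuming text-generation-webui was started with the --api flag and is listening on its default local address; the exact URL, port, and payload fields may differ in your setup.

```python
# Minimal sketch: calling text-generation-webui's OpenAI-compatible
# /chat/completions endpoint. Assumes the server was launched with
# --api and uses the default local address; adjust for your setup.
import requests

URL = "http://127.0.0.1:5000/v1/chat/completions"

payload = {
    "mode": "instruct",  # Deep Reason also works in chat-instruct mode
    "messages": [
        {
            "role": "user",
            "content": "Should I use PostgreSQL or MongoDB for user "
                       "profiles, activity logs, and real-time analytics?",
        }
    ],
}

response = requests.post(URL, json=payload, timeout=300)
print(response.json()["choices"][0]["message"]["content"])
```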

When you send a message, Deep Reason creates an intermediate reply where the model analyzes your input in detail before generating the main reply.
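
The actual implementation ships with the extension, but conceptually the flow looks roughly like the sketch below. All function names and prompt wording here are hypothetical illustrations, not the extension's real code:

```python
# Conceptual sketch of the two-step flow; the function names and prompt
# wording are hypothetical, not the extension's actual implementation.

def build_analysis_prompt(message: str) -> str:
    return f"Analyze this request in detail before answering:\n{message}"

def build_final_prompt(message: str, analysis: str) -> str:
    return (
        f"Request: {message}\n\n"
        f"Your analysis:\n{analysis}\n\n"
        "Now write the final answer."
    )

def deep_reason_reply(message: str, generate) -> str:
    # Step 1: a hidden intermediate generation (the "thinking step").
    analysis = generate(build_analysis_prompt(message))
    # Step 2: the visible reply, conditioned on that analysis.
    return generate(build_final_prompt(message, analysis))
```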

See it in action

Here's a simple example:

Your input:

"I want to create a simple web app to organize my personal recipes. Should I use Python with Flask or JavaScript with Node.js?"

Behind the scenes, the model analyzes:

Okay, so I need to choose between PostgreSQL and MongoDB for my application's database. The app needs to handle user profiles, activity logs, and real-time analytics. Let me start by breaking down what each of these components entails and how they might interact with the two databases.

First, user profiles. That’s typically structured data—things like usernames, email addresses, passwords, maybe some personal information, and preferences. For structured data, relational databases like PostgreSQL are traditionally a good fit because they enforce schemas, ensure data integrity through constraints, and allow for complex queries. But MongoDB, being a NoSQL database, is also capable of handling user profiles, especially if the data isn't too rigid. I can store user profiles as documents with flexible fields, which might be easier if the user data changes over time or includes nested information, like a user's address or a list of interests. However, if I need to perform joins or complex relationships between users and other entities, like their activity logs or analytics data, PostgreSQL might be more suitable because of its support for SQL joins and ACID compliance.

Next, activity logs. These could be records of user actions, such as login times, page visits, or interactions with the app. Activity logs often require time-series data, which is something both databases can handle, but in different ways. PostgreSQL has strong support for time-series data through extensions like TimescaleDB, which can make querying and analyzing logs more efficient. It also allows for indexing and partitioning to manage large volumes of log data. On the other hand, MongoDB’s document model is well-suited for logs that are naturally unstructured or semi-structured, and it can handle high write throughput, which might be important if the logs are generated frequently. But if the logs need to be queried with complex conditions or joined with other data, PostgreSQL might offer more robust tools.

Then there's real-time analytics. This part is a bit trickier. Real-time analytics usually involves processing data as it comes in, which might require a system that can handle high velocity and volume. Both databases can be used for this, but they have different strengths. PostgreSQL is excellent for analytical queries, especially when using features like window functions, materialized views, and indexing. It also supports real-time analytics through tools like TimescaleDB or by using streaming capabilities with other systems. However, if the analytics involve aggregating large datasets or require a more flexible schema, MongoDB might be a better choice. MongoDB’s aggregation framework is powerful and can handle complex data transformations, which could be useful for real-time dashboards or data processing pipelines.

Now, I should consider the data modeling aspects. If the user profiles are simple and don’t change much, a relational model with PostgreSQL might be more efficient. But if the data is more dynamic, with varying fields or nested structures, MongoDB’s document model could be more natural. For example, if a user has multiple roles, or if their preferences are stored in a hierarchical way, MongoDB might make it easier to store and retrieve that information without having to manage multiple tables or complex joins.

Another thing to think about is scalability. PostgreSQL is known for its ability to scale vertically, which means adding more resources to a single server. It also supports horizontal scaling through sharding, though it's more complex than MongoDB's approach. MongoDB, on the other hand, is designed for horizontal scaling from the ground up, making it easier to distribute data across multiple servers as the application grows. If the application is expected to handle a massive amount of data and users, MongoDB might be more scalable in that sense. However, if the data is more relational and the application’s architecture is more traditional, PostgreSQL might be easier to integrate with existing systems.

Data consistency and transaction support are also important. PostgreSQL is a relational database and fully supports ACID transactions, which means it ensures data consistency and reliability even during concurrent operations. This is crucial if the application requires strict data integrity, such as when updating user profiles and simultaneously logging an activity. MongoDB, in contrast, has different levels of consistency depending on the configuration. While it supports multi-document transactions in recent versions, it's generally more flexible in terms of consistency, which can be a trade-off for applications that require high consistency.

Performance is another factor. If the application needs to perform complex queries or joins frequently, PostgreSQL is likely to be faster and more efficient. For example, if I need to retrieve all activity logs for a user in the last 24 hours and also get their profile information, PostgreSQL can do this with a single query using JOINs. MongoDB, while fast for certain types of read and write operations, might require multiple queries or more complex aggregation pipelines to achieve the same result, which could impact performance.

Let me also consider the ecosystem and tools. PostgreSQL has a rich set of tools and extensions for data analysis, backup, replication, and monitoring. It integrates well with other systems like BI tools, data warehouses, and ETL processes. MongoDB has its own set of tools, such as the aggregation framework, which is great for data processing, and it also has a strong community and ecosystem for real-time data handling. However, if the application needs to be integrated with existing systems that are more SQL-oriented, PostgreSQL might be a better fit.

What about the development team’s familiarity with the databases? If the team has more experience with SQL and relational databases, PostgreSQL might be easier to work with. If they are more comfortable with NoSQL and document-based data models, MongoDB could be a better choice. This is a practical consideration that can affect the long-term maintainability of the application.

I should also think about the future growth and potential changes in the application. If the data model is likely to evolve over time, MongoDB’s flexibility might be an advantage. But if the schema is stable and well-defined, PostgreSQL’s structure can provide better data governance and easier maintenance.

Wait, but real-time analytics might also require a different approach. For instance, using a message queue like Kafka or a stream processing framework like Apache Flink could be part of the solution, regardless of the database. However, the choice of database might influence how data is stored and accessed for these analytics. If I'm using MongoDB, I might need to offload analytics data to another system, like a data warehouse or a time-series database. PostgreSQL, with its support for time-series extensions and in-database analytics, might allow me to keep everything in one place, which could simplify the architecture.

Another angle is the type of analytics needed. If it's more about aggregating data over time or running complex queries on historical data, PostgreSQL could be more efficient. If it's about real-time dashboards that require quick updates and simple aggregations, MongoDB might be more suitable, especially if the data is stored in a way that makes it easy to process.

What about the need for indexing and querying? Both databases offer robust indexing capabilities, but PostgreSQL has more advanced query optimization and can handle complex queries more effectively. MongoDB’s indexing is also powerful, especially for large datasets, but it might not be as efficient for certain types of joins or multi-field queries.

I should also think about the data types. If the application deals with a lot of unstructured or semi-structured data, like logs that might have varying fields or user-generated content, MongoDB could be more flexible. But if the data is more structured and relational, PostgreSQL is likely the better choice.

Hmm, but what if the application needs both structured and unstructured data? That might complicate things. For example, user profiles could be structured, but activity logs might include various types of events that are not easily modeled in a relational schema. In that case, using two different databases might be an option—like PostgreSQL for user profiles and MongoDB for logs and analytics. However, that would introduce complexity in terms of data synchronization and querying across databases.

Alternatively, maybe I can use a hybrid approach. For instance, store user profiles in PostgreSQL and activity logs in MongoDB, then use a message queue or an ETL process to move data between them for analytics. But that would require more infrastructure and maintenance.

Wait, but if I go with MongoDB, can I still perform real-time analytics efficiently? I know MongoDB has some built-in tools for this, like the aggregation framework, and it can handle high write throughput. However, for real-time analytics that require complex joins or historical data analysis, I might still need to use a separate system, which could be a data warehouse or a time-series database. That could mean more complexity, but also more flexibility.

On the other hand, if I use PostgreSQL, I can potentially handle all three components—user profiles, activity logs, and real-time analytics—within a single database. This could simplify the architecture and reduce the overhead of managing multiple data stores. But I need to make sure that the application's needs can be met with a relational model.

Let me also think about the community and support. PostgreSQL has a large and active open-source community, with extensive documentation and a wealth of resources. MongoDB also has a strong community, especially in the NoSQL space, and it's widely used for applications that require scalability and flexibility. The availability of support and tools might influence the decision, especially if the team is not very experienced with either database.

What about the cost? Both are open-source, but there are enterprise editions with additional features. If the application is expected to scale to a large user base or handle massive data volumes, the cost of scaling might be a consideration. PostgreSQL might require more powerful hardware for certain workloads, while MongoDB can be distributed more easily.

I should also consider the use of JSON within PostgreSQL. PostgreSQL supports JSON data types and has powerful querying capabilities for JSON data, which could allow it to handle some of the flexibility that MongoDB offers. This might be a good compromise—using PostgreSQL's structured model with JSON fields to store more complex or varying data, while still benefiting from its relational strengths.

Another point is the ease of use. If the team is more familiar with SQL, PostgreSQL might be easier to adopt. If they're used to working with documents and have experience with NoSQL systems, MongoDB could be more intuitive. The learning curve and the time required to onboard the team should be factored in.

Wait, but what about the need for real-time data processing? If the analytics need to be updated as data is ingested, maybe a system like Apache Kafka for streaming data and a real-time analytics engine like Apache Spark or Flink would be better suited. However, the choice of database would still play a role in how data is stored and accessed for those processes.

In summary, the decision between PostgreSQL and MongoDB depends on several factors: the structure of the data, the need for complex queries and joins, the importance of data consistency, scalability requirements, the team's expertise, and the specific analytics needs. If the data is mostly structured and the application requires strong consistency and complex queries, PostgreSQL might be the way to go. If the data is more varied and the application needs to scale horizontally with high write throughput, MongoDB could be a better fit. It's also possible that a hybrid approach or using a third-party analytics tool could be the optimal solution. I need to evaluate each of these aspects carefully and see which database aligns best with the application's specific requirements and future growth.

The result: a more comprehensive and thoughtful response, because the model has taken time to consider different aspects of your question.

For this example, Mistral-Small-3.1-24B-Instruct-2503 was used.

Note: While the analysis step isn't visible in the UI, you can see it in your terminal window if you're curious about how the extension processes your inputs.

These analysis steps are temporary and don't get added to your permanent chat history; they're used once and then discarded. When you ask follow-up questions in a conversation, each new question receives its own fresh analysis, independent of previous ones.
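
In other words, the conversation state only ever holds your messages and the final replies. Continuing the hypothetical sketch from earlier, the bookkeeping would look something like this:

```python
# Sketch only, reusing the hypothetical deep_reason_reply from above:
# each turn runs a fresh analysis, and only the final reply is stored.
history = []  # holds (user_message, final_reply) pairs, nothing else

def chat_turn(message: str, generate) -> str:
    reply = deep_reason_reply(message, generate)  # fresh analysis per turn
    history.append((message, reply))              # the analysis is discarded
    return reply
```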

Frequently Asked Questions

  1. I'm using DeepSeek R1, Qwen 3, or another model that already "thinks" before responding. Can I use this extension?

Yes. The extension works independently of, and in addition to, any model's built-in reasoning capabilities. You can use it with any model to add an extra layer of analysis to its responses.

  2. Does this work with models that don't have built-in thinking steps?

Yes. The extension works with models like Llama 3.3, Phi-4, Gemma 3, and Mistral Large, making them analyze inputs before responding.

  3. Will this make my model slower?

The extension adds one extra generation step, so responses take somewhat longer, but the improved quality is often worth it, especially for difficult or nuanced problems.

  4. Can I control or customize how the extension analyzes inputs?

Yes, the extension includes optional settings that let you adjust how it processes your inputs if you want to fine-tune its behavior. You can also enable or disable it through the UI.

What's included

  • Complete source code for the extension
  • Simple installation instructions (extract and enable)
  • EULA

Ratings

4.9 average (7 ratings)

  • 5 stars: 86%
  • 4 stars: 14%
  • 3 stars: 0%
  • 2 stars: 0%
  • 1 star: 0%