Your AI platform
Supporthub
The Supporthub serves as the central interface for the fully integrable AI platform plugnpl.ai. The platform helps companies increase their efficiency through automation and intelligent solutions. Any LLM can be connected and linked to the company's own data, with full control over that data and its access authorizations. The platform bundles various AI-supported functions, such as agents and real-time translation, to simplify and optimize repetitive tasks. Our solution integrates seamlessly into your existing system landscape, for example your own Teams environment, the company website, or the intranet. Within a development partnership, the Supporthub can be further developed and customized individually, and you also benefit from the developments of the Supporthub community.
Killer features!
What interests you about the Supporthub?
- Securely connect & use company data
- Integrated web search for current results
- Connection of any number of language models (LLMs)
- Create and use an unlimited number of agents
- Speech-to-text translation
- Dialog function - speak input, output is read aloud
- Translating, rephrasing, summarizing and correcting texts
- Understanding and writing code
- Requirements and targets
- Design principles
- Multi-layer architecture
- Infrastructure
- Continuous integration and deployment
- Operating model
Functions
The Supporthub offers a variety of practical functions that you can use with a single click, making your day-to-day work more efficient and effective in no time at all.
Company data
Company-specific data and information can be provided to any language model (LLM). Relevant content from this company data is then taken into account when generating responses. Individual authorizations can be used to control access for each member of the organization, enabling rights- and role-specific, company-wide use of this technology.
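The rights- and role-specific access described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual Supporthub implementation: it assumes each document carries a set of allowed roles, and uses naive keyword matching where a real system would rank by semantic relevance.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: set  # roles permitted to read this document

def retrieve_context(query: str, docs: list, user_roles: set) -> list:
    """Return the text of documents the user is authorized to read
    and that match the query (naive keyword match for this sketch)."""
    visible = [d for d in docs if d.allowed_roles & user_roles]
    return [d.text for d in visible if query.lower() in d.text.lower()]

# Illustrative data: names and roles are invented for the example.
docs = [
    Document("Salary bands 2024 (HR only)", {"hr"}),
    Document("Pricing FAQ: salary negotiation tips for sales", {"hr", "sales"}),
]
print(retrieve_context("salary", docs, {"sales"}))
```

Only documents whose role set overlaps with the user's roles are ever passed to the language model, so unauthorized content cannot appear in a generated answer.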
Web search
When generating answers, relevant content from external internet resources such as websites and social media can be aggregated. The web search integrates Microsoft Bing and can access LinkedIn content as well as Wikipedia. This increases the topicality, breadth and relevance of the output.
Connection of any LLMs
All OpenAI models (GPT-3.5, GPT-4 and GPT-4o) as well as Groq-served models based on Llama 3 and Mixtral can be integrated out of the box. The Supporthub platform also makes it possible to connect any other model, additionally or alternatively, with very little effort. Depending on the requirements of the query, a different LLM can be selected. This also avoids dependence on a single provider.
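A provider-agnostic connection like the one described can be expressed as a small interface behind which any model is swapped in. The sketch below is an assumption about how such an abstraction might look; the class names, registry, and `EchoProvider` stand-in are invented for illustration, and a real provider would call the respective LLM API.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface: any model behind it can be swapped in."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class EchoProvider(LLMProvider):
    """Stand-in provider; a real implementation would call an LLM API."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

# Hypothetical registry: model names here are illustrative.
REGISTRY = {"gpt-4o": EchoProvider("gpt-4o"), "llama3": EchoProvider("llama3")}

def ask(model: str, prompt: str) -> str:
    # A different LLM can be chosen per query; no vendor lock-in.
    return REGISTRY[model].complete(prompt)

print(ask("llama3", "Summarize the meeting notes"))
```

Adding a new model then means registering one more provider class, leaving every caller of `ask` unchanged.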
An optimized chat experience
The user interface of the Supporthub offers numerous functions that optimize handling and thus make it smooth and pleasant.
New chat
- Open any number of new individual chats to start a new dialog
Group chat
- Open a group chat with any number of people in your organization
- Solving questions and problems in a team with AI support
- View and respond to requests and output from group members
Chat history
- Every (group) chat is saved chronologically and can be reopened at any time
- Results can be saved and prompts can be reproduced and optimized at a later date
- Anonymized analysis option for the organization's chat topics, to identify information and training needs or to recognize trends and tendencies at an early stage
Dictation function
- Voice recognition allows input to be spoken rather than typed, eliminating tedious typing and enabling the use of AI in any work situation
Real-time translation
- Spoken text can be simultaneously translated from almost any source language into almost any target language
- This makes language-barrier-free conversations possible
Dialog function
- Input and output are voice-based; using it feels like a real conversation
- The speech speed of the AI output can be set, so responses are spoken as quickly or slowly as the user prefers
Agents
- Agents use prompts with specific rules and contexts to generate results
- The creation of the prompt for the agent can be optimized with the help of a prompt agent
- When the agent is activated, the result corresponds to the instructions in the prompt
- Agents can be created, edited and deleted
- You can share agents you have created with your organization
Multi-Agents
- Multi-agent solutions can be created in which each agent takes on a specific task; its results are passed on to other agents, which check and process them further
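The hand-off between agents described above amounts to a sequential chain. The following is a minimal sketch under the assumption that each agent can be modeled as a function of text to text; the `drafter` and `reviewer` agents are invented placeholders, whereas in the Supporthub each agent would be a prompt with its own rules and context.

```python
def run_pipeline(task, agents):
    """Pass each agent's result on to the next agent, which checks
    and processes it further (a sequential multi-agent chain)."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

# Hypothetical agents, modeled as plain functions for the sketch.
drafter = lambda text: f"draft({text})"
reviewer = lambda text: f"review({text})"

print(run_pipeline("quarterly report", [drafter, reviewer]))
```

Because the chain is just an ordered list, agents can be reordered, added, or removed without changing the pipeline itself.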
Copy
- The entire output can be copied to the clipboard with one click and used in any application
Regenerate
- The search query or prompt is issued again
- A different language model can be selected during regeneration; this can lead to different or better outputs
Read aloud
- The entire output is read out loud
- The reading speed of the output can be set. The content is read out as quickly or slowly as the user prefers
Rate
- A rating scale of 1-5 stars enables simple, effective feedback on output
- This teaches the language model what good and bad outputs are, so it can be trained further
Working with text
Working efficiently and productively with text is essential in every area. Our integrated functions enable this at the push of a button.
Correct spelling
Translate
Reformulate
Summarize
Working with code
Access to coding knowledge enables a wide range of support when working with code.
Create code
Explain code
Modify code
Architecture
Behind the Supporthub is the AI platform plugnpl.ai, which ensures maximum flexibility through state-of-the-art software development and infrastructure. plugnpl.ai has been developed from practical experience for your company and can be tailored precisely to your needs.
Our requirements and goals for plugnpl.ai form the basis of the platform's design principles. These principles shape the architecture, which in turn rests on the underlying infrastructure. On this foundation, seamless continuous integration and deployment become possible, and the operating model of the AI platform can be tailored to your needs: from deployment in your own infrastructure up to SaaS.
Exploit the full potential of the support hub.
Become part of our plugnpl.ai community as a development partner of just experts!
Speed
Thanks to seamless M365 integration, plugnpl.ai can be implemented in your environment in just a few hours. In-house development of a comparable platform takes months or years.
Costs
The costs incurred correspond to an estimated 10% of the costs that would be incurred for independent development.
Flexibility & independence
Due to the flexible structure of the platform, there is no vendor lock-in, so you are not dependent on one provider or product. If better LLMs are developed in the future, you can easily switch to a model that suits you better.
Community
As part of the plugnpl.ai community, you benefit from the ideas and input of other development partners. Potential development costs for additional features and functions can be shared among several partners, and knowledge and experience can be exchanged.