RunLLM is an AI support platform designed for technical teams that need fast, accurate technical support. It analyzes documentation, code, and user feedback to generate accurate answers, helping organizations reduce support workload and improve customer experience.

Founded by a team of AI researchers from UC Berkeley, RunLLM combines knowledge graphs with customized Large Language Models (LLMs) to handle complex user queries. It supports multiple deployment methods, including Slack, Zendesk, and website embedding, making it well suited to businesses that need efficient technical support. RunLLM also learns continuously, optimizing its knowledge base and reducing manual support costs.
Feature List
- Precise answer generation: Provides accurate, citation-backed answers based on documentation, code, and user feedback.
- Code verification and debugging: Automates code execution and validation, troubleshoots problems, and suggests fixes.
- Multi-platform deployment: Supports embedding the AI assistant in Slack, Discord, Zendesk, or websites.
- Knowledge base optimization: Analyzes user queries, identifies missing documentation, and suggests improvements.
- Multimodal support: Processes text, images, and user-uploaded screenshots to generate comprehensive responses.
- Real-time learning: Supports instant training to correct wrong responses and ensure errors are not repeated.
- Data connectors: Integrate multiple data sources so the AI learns product documentation and user interactions in depth.
- Insights and analysis: Provides topic modeling, documentation improvement suggestions, and weekly support data summaries.
Usage Guide
RunLLM is a powerful AI support platform for technical teams and enterprise users. Below is a detailed user guide to help users get started quickly and take full advantage of its features.
Installation and Deployment
RunLLM requires no complicated local installation: it is deployed in the cloud, and users only need to sign up for an account on the official website to start using it. The deployment steps are as follows:
- Register for an account: Go to https://www.runllm.com, click the "Sign Up" button, and fill in your email address, username, and password. Make sure your username does not impersonate someone else and complies with RunLLM's User ID Policy. After registration you will receive a confirmation email; click the link to activate your account.
- Configure data sources: After logging in, go to the RunLLM dashboard. On the Data Connectors page, upload product documentation, API descriptions, or support tickets; RunLLM supports a variety of formats such as PDF, Markdown, and code files. Make sure the uploaded documentation is clear and complete to improve the accuracy of AI responses.
- Select Deployment Platform: In the Deployments tab of the dashboard, select the deployment method. For example:
- Slack integration: RunLLM generates a Slackbot that automatically responds to issues in your community or support channel.
- Website embedding: RunLLM provides a JavaScript snippet that you copy into your website's HTML to embed the chat widget. You can customize the widget's position (e.g., bottom right corner) and its keyboard shortcut (e.g., `Mod+j`).
- Zendesk or other platforms: Similarly, select Zendesk and enter the relevant API key to complete authorization.
- Test the deployment: Once deployed, test the AI assistant on the target platform (e.g., Slack or your website). Enter a simple question such as "How do I configure an API key?" and check the answer for accuracy.
Core Function Operation
The core functionality of RunLLM revolves around technical support and knowledge management. Below is the detailed workflow for each main function:
1. Accurate answer generation
RunLLM generates answers with citations by analyzing uploaded documents and code. Users type a question into a support channel (e.g., Slack) or website chat widget, and the AI scans the knowledge base, extracts relevant information, and generates an answer. For example, type "How do I debug Python FastAPI errors?" RunLLM will provide specific steps and cite relevant documentation. The answer will also include a description of the data source for further reference.
- Procedure:
- Enter questions in the support channel to ensure they are clear and specific.
- Check the answers returned by AI for links to cited documents.
- If the answer is inaccurate, click the "Train" button to enter the correct answer and the AI will immediately learn and update its knowledge base.
2. Code verification and debugging
RunLLM automatically executes code and verifies its correctness, making it suitable for handling technical issues. For example, if a user asks "Why isn't my React component rendering?" AI analyzes the code snippet, performs a simulation run, points out potential errors and suggests fixes.
- Procedure:
- Attach a code snippet to the question, formatted as a code block.
- The AI returns its analysis, including the cause of the error and the code to fix it.
- Copy the suggested code, test it, and give feedback on the results to further refine the AI model.
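As a concrete illustration of the kind of bug such analysis can catch, here is a hypothetical snippet a user might attach (plain JavaScript standing in for the React/JSX case mentioned above): an arrow function with a braced body and no explicit `return`, a common reason a component "renders nothing".

```javascript
// Hypothetical example: an arrow function with a braced body
// silently returns undefined unless you add an explicit return.
// (Plain strings stand in for JSX markup here.)
const Broken = () => {
  "<div>Hello</div>"; // expression is evaluated, then discarded
};

// Fix: return the markup, or use the concise (brace-less) body.
const Fixed = () => "<div>Hello</div>";

console.log(Broken()); // undefined -- nothing to render
console.log(Fixed());  // <div>Hello</div>
```

An assistant that executes or simulates the snippet can point at the missing `return` directly instead of guessing from the prose of the question.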
3. Multi-platform deployment
RunLLM supports flexible deployment and users can choose the platform according to their needs. The following is an example of website embedding:
- Procedure:
- In the RunLLM dashboard, go to the "Config" page and select "Web Widget".
- Copy the code provided:
`<script src="https://widget.runllm.com" runllm-assistant-id="1" async></script>`
- Paste the code into the `<head>` tag of the website's HTML.
- Customize widget parameters, for example set `runllm-position="BOTTOM_RIGHT"` to adjust the position, or `runllm-keyboard-shortcut="Mod+j"` to enable the keyboard shortcut.
- Save and refresh the site to check that the chat widget is displayed. Users can then ask questions via the widget, and the AI responds in real time.
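Putting the embedding steps together, a minimal page might look like the following sketch. The assistant ID is a placeholder and the attribute values are the examples from this guide; copy the exact snippet from your own dashboard.

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My Docs Site</title>
    <!-- RunLLM chat widget: "1" is a placeholder assistant ID;
         use the ID shown in your dashboard. Position and shortcut
         attributes are optional. -->
    <script
      src="https://widget.runllm.com"
      runllm-assistant-id="1"
      runllm-position="BOTTOM_RIGHT"
      runllm-keyboard-shortcut="Mod+j"
      async>
    </script>
  </head>
  <body>
    <h1>Documentation</h1>
  </body>
</html>
```

The `async` attribute lets the widget script load without blocking page rendering.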
4. Knowledge base optimization
RunLLM can analyze user queries and identify missing content in documents. For example, if a user frequently asks about a feature, the AI will suggest additional documentation. Users can view the suggestions on the "Insights" page of the dashboard.
- Procedure:
- Regularly check the "Insights" page for AI-generated maps of problem hotspots.
- Update documentation as suggested and re-upload to RunLLM.
- Test the effectiveness of the new document to ensure that the AI answers are more accurate.
5. Multimodal support
RunLLM supports processing images and screenshots. For example, if a user uploads a screenshot of an error log, the AI will analyze the content of the image and generate a solution in conjunction with the documentation.
- Procedure:
- Click the "Upload" button in the chat widget and select the screenshot file.
- Enter a relevant question and the AI will synthesize and analyze the image and text.
- Check the answer to make sure it resolves the problem.
Caveats
- Data privacy: RunLLM complies with the Children's Online Privacy Protection Act (COPPA) and does not collect personal information from children under the age of 13. Users must ensure that uploaded data contains no sensitive information.
- Paid service: Some premium features require a subscription, and payments are handled through Stripe. Users must provide a payment method and can view billing details in the dashboard.
- Continuous optimization: RunLLM is updated regularly; follow the notifications on the official website or subscribe to its emails to make sure you are using the latest version.
Application Scenarios
- Technical support teams: RunLLM helps technical support teams respond quickly to user queries and reduce manual processing time. For example, API development teams can integrate RunLLM into Zendesk to automatically answer frequently asked questions, letting engineers focus on complex tasks.
- Open source community management: Open source projects can deploy RunLLM to Discord or Slack to respond quickly to community issues. For example, the SkyPilot community uses RunLLM to provide accurate code debugging advice and improve user engagement.
- Enterprise customer support: Organizations can embed RunLLM in their official website to provide instant guidance to new users. For example, Arize cut problem resolution time by 50% and improved customer retention by 15% with RunLLM.
- Internal knowledge management: Internal teams can use RunLLM to streamline documentation lookups. For example, engineers can query API documentation via the Slackbot to get answers quickly and reduce training costs.
FAQ
- How does RunLLM ensure answer accuracy?
RunLLM generates answers by analyzing uploaded documents and code, combining knowledge graphs with customized large language models. Each answer cites its data sources so users can verify accuracy. If a response is incorrect, the user can immediately train the AI so the same mistake is not repeated.
- Is programming knowledge required to use RunLLM?
No. RunLLM provides an intuitive dashboard and pre-built data connectors; regular users can simply upload documentation and select a deployment platform. For advanced features such as code debugging, a basic programming background helps in understanding the AI's recommendations.
- What languages does RunLLM support?
RunLLM currently focuses on English documentation and code analysis but can handle user queries in other languages, such as Chinese. Refer to the latest documentation on the official website for the list of supported languages.
- How is private data handled?
RunLLM processes payment data through Stripe and adheres to a strict privacy policy. Documents and code uploaded by users are used only to generate responses and for no other purpose.