Another week of building and improving Storytell for you! Explore our latest engineering demos, featuring error fixes, SmartChat™ question library updates, and more backend improvements. We're always open to your feedback and suggestions, so keep them coming!
1. Our engineering team conducted an in-depth analysis to identify and address the most frequent errors users encountered. Following our investigation, we've implemented fixes to prevent these errors from recurring, ensuring a smoother and more efficient experience while using Storytell.
Please note that in certain parts of the videos, we have intentionally blurred out Ryan's screen to protect the privacy of our code.
2. You now have the flexibility to edit, delete, or pin saved questions in your Question Library! This gives you greater control over your SmartChat™ experience, letting you tailor it to your specific needs. We've also included a selection of prompts commonly used by our users.
Pro-tip: Take advantage of the "Up" and "Down" arrows on your keyboard to easily navigate to your previously asked questions!
3. Our team demonstrated ChatGPT's code interpreter capabilities: it writes code, executes it, and responds based on the output. This approach efficiently interpreted CSV files, producing accurate responses grounded in real data. Choosing the right technology sped up the feedback loop and improved overall efficiency. We're working on integrating this into Storytell in the near future, so stay tuned!
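To make the write-execute-respond loop concrete, here is a minimal sketch of the pattern in Python. The CSV data, function names, and question are all hypothetical stand-ins, not Storytell's or OpenAI's actual implementation; the point is that the final answer is computed from the real data rather than guessed by the model.

```python
import csv
import io

# Hypothetical sample data standing in for an uploaded CSV file.
SAMPLE_CSV = """region,revenue
North,1200
South,800
North,300
"""

def run_generated_code(csv_text: str) -> dict:
    """Stands in for the code the interpreter writes and executes:
    it aggregates revenue per region from the actual file contents."""
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["revenue"])
    return totals

def answer_from_output(totals: dict) -> str:
    """The model's final response is grounded in the executed output."""
    top = max(totals, key=totals.get)
    return f"{top} has the highest revenue at {totals[top]:.0f}."

print(answer_from_output(run_generated_code(SAMPLE_CSV)))
```

Because the aggregation runs as real code over the real file, the response stays accurate even on data the model has never seen, which is what shortens the feedback loop.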
4. Our engineering team has significantly refactored the backend logic for prompts. The context for this change is the number of use cases for the chat functionality, such as chatting with a tag, a pure LLM, or the 'All My Knowledge' feature, among others. Previously, the logic was scattered and challenging to modify for different use cases. It has now been consolidated into an LLM use case config object, making it easier to add new use cases and modify existing ones.
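A rough sketch of what consolidating per-use-case prompt logic into a single config object can look like. Every name, field, and prompt string below is an illustrative assumption, not Storytell's actual code; the design point is that adding a use case becomes one new entry instead of scattered conditionals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LLMUseCaseConfig:
    """One place to describe how each chat use case builds its prompt.
    All fields here are hypothetical examples."""
    name: str
    system_prompt: str
    retrieve_context: bool  # whether to pull source content before prompting
    max_context_files: int  # 0 means no file context (pure LLM chat)

USE_CASES = {
    "tag_chat": LLMUseCaseConfig(
        name="tag_chat",
        system_prompt="Answer using the documents under this tag.",
        retrieve_context=True,
        max_context_files=10,
    ),
    "pure_llm": LLMUseCaseConfig(
        name="pure_llm",
        system_prompt="Answer from general knowledge.",
        retrieve_context=False,
        max_context_files=0,
    ),
    "all_my_knowledge": LLMUseCaseConfig(
        name="all_my_knowledge",
        system_prompt="Answer using everything in the user's library.",
        retrieve_context=True,
        max_context_files=100,
    ),
}

def build_prompt(use_case: str, question: str) -> str:
    # A new use case only needs a new USE_CASES entry; this code is unchanged.
    cfg = USE_CASES[use_case]
    return f"{cfg.system_prompt}\n\nUser: {question}"
```

With the config object in one place, the difference between chatting with a tag and a pure LLM chat is data, not branching logic.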
Please note that in certain parts of the videos, we have intentionally blurred out Parker's screen to protect the privacy of our code.
5. Full report content level for Tags and Tag Groups enriches the context sent to OpenAI when a chat message is sent. Previously, only the most relevant sentences were pulled from the report or tag, which could leave out context when asking broad questions, such as a summary of an entire tag or report. The new update pulls the entire source content as context for tags or tag groups with fewer than 10 files, as long as it fits within the context window. See it in action in the demo video below:
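The selection rule described above can be sketched as a small decision function. The threshold constants and the token estimate below are illustrative assumptions, not Storytell's actual values: fewer than 10 files and content that fits the window means full source content is sent; otherwise it falls back to the most relevant sentences.

```python
# Hypothetical thresholds; the 10-file limit comes from the update above,
# the window size and 4-chars-per-token heuristic are assumptions.
MAX_FILES_FOR_FULL_CONTENT = 10
CONTEXT_WINDOW_TOKENS = 8_000

def estimate_tokens(text: str) -> int:
    """Rough heuristic: about 4 characters per token."""
    return len(text) // 4

def select_context(files: list, relevant_sentences: list) -> str:
    """Send full source content for small tags that fit the window;
    otherwise fall back to only the most relevant sentences."""
    full = "\n\n".join(files)
    if (len(files) < MAX_FILES_FOR_FULL_CONTENT
            and estimate_tokens(full) <= CONTEXT_WINDOW_TOKENS):
        return full
    return "\n".join(relevant_sentences)
```

This keeps broad questions like "summarize this tag" grounded in everything the tag contains, while larger tags still degrade gracefully to relevance-ranked sentences.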
Status: These changes are live in our staging environment and will be pushed to production by EOD on our next release date: Wednesday, Nov 15th.