The National Institute of Standards and Technology (NIST) wants feedback on a newly developed chatbot for its National Cybersecurity Center of Excellence (NCCoE) to streamline the cybersecurity guidelines that it creates. 

“The NCCoE identified a potential application for a chatbot to support its mission and developed a secure, internal-use chatbot to assist NCCoE staff with discovering and summarizing cybersecurity guidelines tailored to specific audiences or use cases,” said NIST in a June 18 announcement. 

The chatbot was built using retrieval-augmented generation (RAG)-based large language model (LLM) technology to generate what NIST says are “more focused, contextually relevant responses” that use “a repository of cybersecurity knowledge including previous NCCoE publications.” 

Public comment on the NIST report for its chatbot is available until August 4 at midnight. 

Specific features of the chatbot include page number citations on answers generated by the tool, which NIST said “enables users to explore the original publications for a deeper understanding.” 

This also improves the readability of cybersecurity guidelines and best practices produced by NCCoE. 

“NCCoE publications contain large amounts of cybersecurity guidelines as well as precise details for their implementation,” said NIST. “Using a RAG in this way will increase the searchability of cybersecurity guidelines for both NCCoE researchers as well as the wider community.” 

The NCCoEchatbot runs on a powerful Nvidia computer with four high-end graphics processors, using open-source tools to support advanced language models, and is available to engineers through a secure lab network with traffic managed for reliable access. 

Cybersecurity protocols used by the chatbot include “measures to mitigate potential vulnerabilities such as prompt injections, which can manipulate the chatbot into generating unintended responses.” 

“To counteract this, the chatbot is designed with robust input validation and filtering mechanisms to provide guardrails around user inputs and responses,” said NIST. “These precautions help maintain the integrity and reliability of the chatbot’s interactions, safeguarding against potential misuse or exploitation.” 

Read More About
Recent
More Topics
About
Weslan Hansen
Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.
Tags