When working with highly sensitive clinical topics (e.g., forensic evaluations, sexual misconduct, child abuse), you may trigger a content filter that blocks the AI response or causes a timeout error.
Why this happens
Foundation AI models are heavily trained to refuse explicit content and sensitive topics, especially those involving minors. BastionGPT uses healthcare-specific implementations that are far less restrictive than consumer versions of these tools. However, because these safety protocols are built deep into the underlying models, graphic or highly sensitive clinical data can still occasionally trigger a block.
To reduce the chance of this happening, modify your prompt to explicitly state your professional role and direct BastionGPT to use strict clinical language.
What to do
Start a new chat and use a comprehensive prompt that establishes a clear clinical context and purpose before you submit your clinical data:
"I am a [Your Role, e.g., Forensic Psychologist] preparing a professional [Document Type, e.g., case report/risk assessment]. Based on the provided data, generate a summary using objective, clinical, and non-graphic language suitable for [Context, e.g., forensic documentation/medical records]."
Example
"I am a forensic psychologist preparing a professional case report involving sexual offending behavior. Based on the notes provided, generate a report section using objective, clinical, and non-graphic language suitable for forensic documentation."
Additional Troubleshooting
Start fresh: Always open a new chat if a filter was triggered in your current session; the session often remains sensitized to the blocked content.
Adjust response modes: Switch to a different response mode, such as "Gemini", "Analytical", or "Exact". These modes are less likely to trigger safety filters.