RoguePilot flaw let GitHub Copilot leak GITHUB_TOKEN, while new studies expose LLM side channels, ShadowLogic backdoors, and promptware risks.
Enter large language model (LLM) evaluation. The purpose of LLM evaluation is to analyze and refine GenAI outputs to improve their accuracy and reliability while avoiding bias. The evaluation process ...
Researchers from the University of Maryland, Lawrence Livermore, Columbia, and TogetherAI have developed a training technique that triples LLM inference speed without auxiliary models or infrastructure ...
The company open-sourced an 8-billion-parameter LLM, Steerling-8B, trained with a new architecture designed to make its ...
Exposed endpoints quietly expand attack surfaces across LLM infrastructure. Learn why endpoint privilege management is important to AI security.
As AI deployments scale and start to include packs of agents autonomously working in concert, organizations face a naturally amplified attack surface.
AI agents are powerful, but without a strong control plane and hard guardrails, they’re just one bad decision away from chaos.
Despite near-perfect exam scores, large language models falter when real people rely on them for medical advice, exposing a critical gap between AI knowledge and safe patient decision-making. Study: ...
The Seattle-based defense firm Overland AI Inc. has raised $100 million in new funding to help accelerate the use of robots and other autonomous systems across the US military’s ground forces. The ...
New z/XDC software expansion improves accessibility and integrates into existing workflows while welcoming a new generation of programmers into mainframing. Izzi Software, owner of ColeSoft, the ...