BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Silicon Valley Engineering Council - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://svec.org
X-WR-CALDESC:Events for Silicon Valley Engineering Council
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:-07:00
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=-07:00:20260420T183000
DTEND;TZID=-07:00:20260420T203000
DTSTAMP:20260416T200241Z
CREATED:20260310T180308Z
LAST-MODIFIED:20260310T180308Z
UID:77852-1776709800-1776717000@svec.org
SUMMARY:Labelled Deductive Systems for Logical Validation of LLM Output
DESCRIPTION:**TALK LOGISTICS:**\nMonday\, April 20\, 2026\n6:30 registration\, food\, networking.\n7:00 SFbayACM upcoming events\, introduce the speaker\n7:10 to 8:15 or 8:30 presentation\nThe Zoom and YouTube links will be provided here about 2-3 days before the event\nSFbayACM will support a local audience at VRP in Mountain View\n**TALK DESCRIPTION:**\nLarge language models (LLMs) often generate fluent but incorrect or unsupported statements\, commonly referred to as hallucinations. We propose a hallucination detection framework\, ValidLLP4LLM\, based on a Labeled Logic Program (LLP) architecture that integrates multiple reasoning paradigms\, including logic programming\, argumentation\, probabilistic inference\, and abductive explanation. By enriching symbolic rules with semantic\, epistemic\, and contextual labels and applying discourse-aware weighting\, the system prioritizes nucleus claims over peripheral statements during verification. Experiments on three benchmark datasets and a challenging clinical narrative dataset show that LLP consistently outperforms classical symbolic validators\, achieving the highest detection accuracy when combined with discourse modeling. A human evaluation further demonstrates that logic-assisted explanations improve both hallucination detection accuracy and user trust. The results suggest that labeled symbolic reasoning with discourse awareness provides a robust and interpretable approach to LLM verification in safety-critical domains.\n**SPEAKER BIO:**\nProf. Boris Galitsky has contributed linguistic and machine learning technologies to Silicon Valley startups as well as companies like eBay and Oracle for over 25 years. His information extraction and sentiment analysis techniques assisted several acquisitions\, such as Xoopit by Yahoo\, Uptake by Groupon\, LogLogic by TIBCO\, and Zvents by eBay. His security-related document analysis technologies contributed to the acquisition of Elastica by Symantec.
\nAs an architect of the Intelligent Bots project at Oracle\, he developed a discourse analysis technique used for dialogue management and published in the book Developing Enterprise Chatbots. He also published a two-volume monograph\, “AI for CRM”\, based on his experience developing Oracle Digital Assistant. He is an Apache committer to OpenNLP\, where he created the OpenNLP.Similarity component\, which is the basis for a semantically enriched search engine and chatbot development.\nHis exploration and formalization of human reasoning culminated in the book Computational Autism\, broadly used by parents of children with autism spectrum disorder and by rehabilitation personnel. His focus on the medical domain led to another research monograph\, “Artificial Intelligence for Healthcare Applications and Management”. He is now a lead researcher at the Moscow Institute of Physics and Technology\, Russia.\nhttps://www.linkedin.com/in/boris-galitsky-342109204/\n#ACM #SFbayACM #AI #GenAI #DataScience #LLM\n#Hallucinations #LogicProgramming #NLP #Discourse #Benchmark
URL:https://svec.org/event/labelled-deductive-systems-for-logical-validation-of-llm-output/
LOCATION:Valley Research Park\, 319 N Bernardo Ave\, Mountain View\, CA\, 94043\, United States
ATTACH;FMTTYPE=image/jpeg:https://svec.org/wp-content/uploads/2026/03/1024x576-bCQPfF.jpg
END:VEVENT
END:VCALENDAR