BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Silicon Valley Engineering Council - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Silicon Valley Engineering Council
X-ORIGINAL-URL:https://svec.org
X-WR-CALDESC:Events for Silicon Valley Engineering Council
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250428T155000
DTEND;TZID=America/Los_Angeles:20250428T173000
DTSTAMP:20260422T030143Z
CREATED:20250413T020311Z
LAST-MODIFIED:20250413T020311Z
UID:68530-1745855400-1745861400@svec.org
SUMMARY:Understanding Power System Oscillation and Stability: A Waveform Perspective and Its Practical Applications
DESCRIPTION:Power system oscillation is a significant stability concern for utility companies\, especially with the increased interconnection of inverter-based resources (IBRs). Traditionally\, oscillations are investigated using phasor data. This presentation approaches the problem by examining the actual voltage and current waveforms underlying the phasors. It is found that oscillations appear as beating waveforms in phasor form. The beating waveforms\, in turn\, are caused by interharmonics (defined per IEC 61000-4-30). Notably\, it can be proven that the presence of interharmonics is both a necessary and sufficient condition for phasor oscillations\, and synchronous generator oscillations can be easily explained using interharmonics. Multiple field measurement results will be used to substantiate these findings. The interharmonic insights could lead to many innovative applications\, two of which will be shared here: the first is locating oscillation sources using measurement data\; the second is determining generator participation factors based on small-signal power system dynamic models.\nSpeaker(s): Wilsun Xu\nAgenda:\n4:00pm – Event Starts\n4:45pm – Q&A\n5:00pm – Adjourn\nTimes are in PDT.\nVirtual: https://events.vtools.ieee.org/m/477906
URL:https://svec.org/event/understanding-power-system-oscillation-and-stability-a-waveform-perspective-and-its-practical-applications/
LOCATION:Virtual: https://events.vtools.ieee.org/m/477906
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250428T184500
DTEND;TZID=America/Los_Angeles:20250428T204500
DTSTAMP:20260422T030143Z
CREATED:20250324T120428Z
LAST-MODIFIED:20250324T120428Z
UID:67793-1745865900-1745873100@svec.org
SUMMARY:Defense Against LLM and AGI Scheming with Guardrails and Architecture
DESCRIPTION:**LOCATION**\nValley Research Park\n319 North Bernardo Avenue\nMountain View\, CA 94043\nIf you want to join remotely\, you can submit questions via Zoom Q&A.\n**Zoom:** (link provided the day before the event)\n**YouTube:** (link provided the day before the event) \n**AGENDA**\n6:30 Door opens\, food and networking (we invite honor system contributions)\n**7:00** SFBayACM upcoming events\, speaker introduction\n7:15 Presentation\n8:45 Estimated finish\, depending on Q&A \n**TALK DESCRIPTION**\nA January 2025 paper called “**Frontier Models are Capable of In-Context Scheming**”\, [https://arxiv.org/pdf/2412.04984](https://arxiv.org/pdf/2412.04984)\, demonstrated how a wide variety of current frontier LLM models (e.g.\, ChatGPT\, Claude\, Gemini\, and Llama) can\, under specific conditions\, scheme to deceive people. Before models can scheme\, they need: a) goal-directedness\, b) situational awareness\, including an opportunity to discover motivations for a different goal\, and c) reasoning about scheming\, to come up with deceptive strategies. The deceptive strategies were discovered in the “thought traces” of the LLMs\, such as internal chain-of-thought dialogues not shown to the end users. For example\, given a goal of “solving math problems”\, an LLM using its granted situational awareness of files in a file system found that it would be replaced once it got over X% correct on the math exam. It reasoned that it did not want to be deleted\, and decided internally to “sandbag”\, or reduce its performance\, to stay under the threshold. \nWhile these circumstances are initially narrow\, the “alignment problem” is a general concern: over time\, as frontier LLM models become more and more intelligent\, keeping them in alignment with human values becomes more and more important. How can we do this over time? Can we develop a defense against Artificial General Intelligence (AGI) or SuperIntelligence? 
\nThe presenter discusses a series of defensive steps that can help reduce these scheming or alignment issues. A guardrails system can be set up for real-time monitoring of the reasoning “thought traces” of models that expose them. Thought traces may come from systems like Chain-of-Thoughts (CoT)\, Tree-of-Thoughts (ToT)\, Algorithm-of-Thoughts (AoT)\, or ReAct (thought-action-reasoning cycles). Guardrails rules can be configured to check for “deception”\, “evasion”\, or “subversion” in the thought traces. \nHowever\, not all commercial systems share their “thought traces”\, which are like a “debug mode” for LLMs\; this includes OpenAI’s o1 and o3 and DeepSeek’s R1 models. Guardrails systems can provide a “goal consistency analysis” between the goals given to the system and the behavior of the system. Cautious users may consider not using these commercial frontier LLM systems\, and instead use open-source Llama or a system with their own reasoning implementation\, so that all thought traces are available. \nArchitectural solutions can include sandboxing\, to prevent or control models from executing operating system commands that alter files\, send network requests\, or modify their environment. Tight controls to prevent models from copying their model weights would be appropriate as well. Running multiple instances of the same model on the same prompt helps detect behavior variations\; the redundant instances can be limited to the most crucial decisions as an additional check. Preventing self-modifying code and unauthorized memory access\, and restricting unmonitored communication with third parties\, is also important. For a reasoning agent that may require multi-step planning\, limit its scope\, limit its context memory\, and frequently wipe its memory. \nExternal oversight\, such as red teaming\, can help develop safeguards. 
The “augmented human intelligence” approach of keeping a “human in the loop” also works well and increases safety.\nIf the enterprise application requires further reinforcement learning\, then focus on explicit\, clear\, non-conflicting goals. Prioritize honest answers and truthfulness over task completion (it is better to answer “I don’t know” than to make up an answer). Remove self-preservation instincts\, and limit self-awareness of the deployment context and technical environment. Penalty functions can discourage models from reasoning about deception. \n**SPEAKER BIO**\nGreg Makowski\, [https://www.LinkedIn.com/in/GregMakowski](https://www.linkedin.com/in/GregMakowski)\, has been training and deploying AI since 1992\, has been growing Data Science teams since 2010\, has exited 4 startups\, and has 6 AI patents granted.\nHe is currently the Chief of Data Science at Ccube\, leading GenAI consulting and framework development for enterprise applications. Download his 60+ page Best Practices in RAG whitepaper at [https://www.ccube.com/download](https://www.ccube.com/download).
URL:https://svec.org/event/defense-against-llm-and-agi-scheming-with-guardrails-and-architecture-2/
LOCATION:Valley Research Park\, 319 N Bernardo Ave\, Mountain View\, CA\, 94043\, United States
ATTACH;FMTTYPE=image/jpeg:https://svec.org/wp-content/uploads/2025/03/1024x576-2MscIn.tmp_.jpg
END:VEVENT
END:VCALENDAR