Christian A. Camarce

Director
Sterne Kessler

Christian A. Camarce is a director in Sterne Kessler’s Electronics Practice Group. Christian focuses his practice on patent portfolio management and global IP strategy. Leveraging his experience as a former senior integrated circuit (IC) design engineer, Christian counsels clients to obtain and enforce patent protection in a wide variety of technologies, including IC chip design and packaging, semiconductor fabrication, and wireless communications.


Daniel S. Block

Director
Sterne Kessler

Daniel S. Block is a director in Sterne Kessler's Electronics Practice Group. Dan’s practice primarily focuses on patent and anti-counterfeiting litigation at the International Trade Commission and in federal district court. Dan has been integral in designing and implementing anti-counterfeiting measures for numerous international brands, with a focus on cost-neutral enforcement strategies. He has also served as counsel in over 50 post-grant proceedings at the USPTO’s Patent Trial and Appeal Board. His technical expertise covers many areas of computing, including computer graphics, networking communications, web services, complex computer architectures, and storage systems.

 

Jonathan Tuminaro

Ph.D.; Director
Sterne Kessler

Jonathan Tuminaro, Ph.D. is a director in Sterne Kessler’s Trial & Appellate and Electronics Practice Groups. Jonathan is an experienced trial lawyer who focuses his practice on complex electronics litigation in the U.S. district courts, the U.S. International Trade Commission (ITC), and the Patent Trial and Appeal Board (PTAB). He has represented some of the leading high-tech companies in patent litigation matters relating to telecommunications, network security, LCD flat-panel displays, computer graphics, automotive technologies, GPS location-based service, and wireless power transfer.

As the era of high-performance computing (HPC) and artificial intelligence (AI) ushers in unprecedented advancements, reliance on cloud strategies becomes vital. With cloud infrastructure increasingly integral to supporting demanding computational workloads, maintaining the availability and robustness of these systems is paramount.

This panel will delve into the critical intersection of HPC/AI and cloud technology, spotlighting strategies for ensuring uninterrupted operations in the face of emerging challenges. The session brings together leading experts to examine architectural design paradigms that foster robustness, redundancy trade-offs, load balancing, and intelligent fault detection and predictive monitoring mechanisms. Experts will share best practices for optimizing resource allocation, orchestrating seamless workload migrations, and deploying resilient cloud-native solutions. By exploring real-world cases, emerging trends, and practical insights, this discussion aims to equip data center and cloud professionals with the knowledge to elevate their resiliency strategies amid evolving computational demands.

Moderator


Alam Akbar

Director, Product Marketing
proteanTecs

Alam Akbar is a veteran of the semiconductor industry with experience spanning multiple engineering, product management, and product marketing roles. He holds a Bachelor of Science degree in Electrical Engineering from Texas A&M and an MBA from Santa Clara University.

Alam began his career at Synopsys as an Application Consultant, where he helped grow their market share in the signoff domain. He then joined the business management team at Cadence, where he helped launch a new physical verification solution. After Cadence, Alam joined Intel Foundry Services as a design kit program manager, and then moved into the client compute group as director of product marketing. There, he helped scale Intel's storage business and developed product strategy for new memory solutions for the PC market.

At proteanTecs, he’s part of a team that’s bringing greater insight into the health and performance of semiconductors across the value chain, from the design stage to in-field operation, and every step in between.


Panellists


Venkat Ramesh

Hardware Systems Engineer
Meta

Venkat Ramesh is a Hardware Systems Engineer in Meta's Infrastructure organization.

As a Technical Lead in the Release-to-Production team, Venkat has been at the helm of pivotal initiatives aimed at bringing various AI/ML accelerator, compute, and storage platforms into the Meta fleet. His multifaceted technical background spans software development, performance engineering, NPI, and hardware health telemetry at hyperscalers and hardware providers.

Deeply passionate about AI hardware resiliency, Venkat's current focus is on building tools and methodologies to enhance hardware reliability, performance, and efficiency for rapidly evolving AI workloads and technologies.


Yun Jin

Engineering Director
Meta

Yun Jin is an Engineering Director of Infrastructure at Meta, where he leads Meta's strategy for private cloud capacity and efficiency. Before Meta, Yun held engineering leadership roles at PPLive, Alibaba Cloud, and Microsoft. He has worked on large-scale distributed systems, cloud, and big data for 20 years.


Paolo Faraboschi

Vice President and HPE Fellow; Director, AI Research Lab
Hewlett Packard Labs, HPE

Paolo Faraboschi is a Vice President and HPE Fellow and directs the Artificial Intelligence Research Lab at Hewlett Packard Labs. Paolo has been at HP/HPE for three decades and has worked on a broad range of technologies, from embedded printer processors to exascale supercomputers. He previously led exascale computing research (2017-2020) and the hardware architecture of “The Machine” project (2014-2016), pioneered low-energy servers with HP’s project Moonshot (2010-2014), drove scalable system-level simulation research (2004-2009), and was the principal architect of a family of embedded VLIW cores (1994-2003), widely used in video SoCs and HP’s printers. Paolo is an IEEE Fellow (2014) for “contributions to embedded processor architecture and system-on-chip technology”, the author of over 100 publications and the book “Embedded Computing: A VLIW Approach”, and an inventor on 70 granted patents. He received a Ph.D. in EECS from the University of Genoa, Italy.


Abstract coming soon...

Moderator


Karl Freund

Founder & Principal Analyst
Cambrian AI Research

Karl Freund is the founder and principal analyst of Cambrian AI Research. Prior to this, he was Moor Insights & Strategy’s consulting lead for HPC and Deep Learning. His recent experience as VP of Marketing at AMD and Calxeda, as well as his previous positions at Cray and IBM, positions him as a leading industry expert in these rapidly evolving industries. Karl works with investment and technology customers to help them understand the emerging Deep Learning opportunity in data centers, from competitive landscape to ecosystem to strategy.

Karl has worked directly with datacenter end users, OEMs, ODMs, and the industry ecosystem, enabling him to help his clients define the appropriate business, product, and go-to-market strategies. He is also a recognized expert on low-power servers and the emergence of ARM in the datacenter, and has been a featured speaker at scores of investment and industry conferences on this topic.

Accomplishments during his career include:

  • Led the revived HPC initiative at AMD, targeting APUs at deep learning and other HPC workloads
  • Created an industry-wide thought leadership position for Calxeda in the ARM Server market
  • Helped forge the early relationship between HP and Calxeda leading to the surprise announcement of HP Moonshot with Calxeda in 2011
  • Built the IBM Power Server brand from 14% market share to over 50% share
  • Integrated the Tivoli brand into the IBM company’s branding and marketing organization
  • Co-led the integration of HP and Apollo Marketing after the Boston-based desktop company’s acquisition

 

Karl’s background includes RISC and Mainframe servers, as well as HPC (Supercomputing). He has extensive experience as a global marketing executive at IBM where he was VP Marketing (2000-2010), Cray where he was VP Marketing (1995-1998), and HP where he was a Division Marketing Manager (1979-1995).

 

Panellists


Zairah Mustahsan

Senior Data Scientist
You.com

Zairah Mustahsan is a Staff Data Scientist at You.com, an AI chatbot for search, where she leverages her expertise in statistical and machine-learning techniques to build analytics and experimentation platforms. Previously, Zairah was a Data Scientist at IBM Research, researching Natural Language Processing (NLP) and AI Fairness topics. Zairah obtained her M.S. in Computer Science from the University of Pennsylvania, where she researched scikit-learn model performance; her findings have since been used as guidelines for machine learning. Zairah is a regular speaker at AI conferences such as NeurIPS, AI4, AI Hardware & Edge AI Summit, and ODSC. She has published her work in top AI conferences such as AAAI and has over 300 citations. Aside from work, Zairah enjoys adventure sports and poetry.


Sravanthi Rajanala

Director, Machine Learning & Search
Walmart Tech

Sravanthi Rajanala is the Director of Data Science and Machine Learning in Walmart's Search Technologies. She began her career in telecom and worked for Microsoft and Nokia before joining Bing Search in 2011 to work in machine learning and search. Sravanthi has led initiatives in query and document understanding, ranking, and question answering. In 2021, she joined Walmart, where she now leads the Search Core Algorithms, Machine Translation, and Metrics Science teams. Sravanthi holds a Master's degree in Computational Science from the Indian Institute of Science and a Bachelor's degree in Computer Science from Osmania University.


Selcuk Kopru

Director, Engineering & Research, Search
eBay

Selcuk Kopru is Head of ML & NLP at eBay and an experienced AI leader with proven expertise in creating and deploying cutting-edge NLP and AI technologies and systems. He is experienced in developing scalable machine learning solutions to solve big data problems that involve text and multimodal data, and is skilled in Python, Java, C++, machine translation, and pattern recognition. Selcuk holds a Ph.D. in Computer Science, focused on NLP, from Middle East Technical University.


Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in high-performance memories like HBM and GDDR, as well as mainstream memories like DDR. Emerging memories and new technologies like CXL offer additional possibilities for improving the memory hierarchy. In this panel, we’ll discuss important enabling technologies and key challenges the industry needs to address for memory systems going forward.

Moderator


Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College


Panellists


David Kanter

Founder & Executive Director
MLCommons

David co-founded MLCommons, the world leader in building benchmarks for AI, and is Head of MLPerf. MLCommons is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members, global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI Safety. Its software projects are generally available under the Apache 2.0 license and its datasets generally use CC-BY 4.0.


Brett Dodds

Senior Director, Azure Memory Devices
Microsoft


Nuwan Jayasena

Fellow
AMD

Nuwan Jayasena is a Fellow at AMD Research, and leads a team exploring hardware support, software enablement, and application adaptation for processing in memory. His broader interests include memory system architecture, accelerator-based computing, and machine learning. Nuwan holds an M.S. and a Ph.D. in Electrical Engineering from Stanford University and a B.S. from the University of Southern California. He is an inventor of over 70 US patents, an author of over 30 peer-reviewed publications, and a Senior Member of the IEEE. Prior to AMD, Nuwan was a processor architect at Nvidia Corp. and at Stream Processors, Inc.


Pre-training Foundation Models is prohibitively expensive and therefore impossible for many companies. This is especially true if the models are Large Language Models (LLMs). However, people hope that Foundation Models will live up to the promise of learning more generally than classical Artificial Intelligence (AI) models. The dream is that if you provide just a few examples to Foundation Models, they could extrapolate the high-level, abstract representation of the problem and learn how to accomplish tasks that they have never been trained to execute before. So, the question is, how can you lower the cost of fine-tuning pre-trained Foundation Models for your needs? This is what we will discuss in this panel. We will share our personal experience, synthesized into a set of principles, so that you can discover how we found ways to lower the cost of fine-tuning pre-trained Foundation Models across multiple domains.

Moderator


Fausto Artico

Head of Innovation and Data Science
GSK

Fausto has two PhDs (in Information and Computer Science, respectively), earning his second master’s and PhD at the University of California, Irvine. Fausto also holds multiple certifications from MIT, Columbia University, the London School of Economics and Political Science, Kellogg School of Management, the University of Cambridge, and soon also from the University of California, Berkeley. He has worked in multi-disciplinary teams and has over 20 years of experience in academia and industry.

As a Physicist, Mathematician, Engineer, Computer Scientist, and High-Performance Computing (HPC) and Data Science expert, Fausto has worked on key projects at European and American government institutions and with key individuals, like Nobel Prize winner Michael J. Prather. After his time at NVIDIA Corporation in Silicon Valley, Fausto worked at the IBM T. J. Watson Research Center in New York on exascale supercomputing systems for the US government (e.g., Livermore and Oak Ridge Labs).


Panellists


Lisa Cohen

Director of Data Science for Gemini, Google Assistant, and Search Platforms
Google

Lisa Cohen is Director of Data Science for Gemini (formerly "Bard"), Google Assistant, and Search Platforms. She leads an organization of data scientists at Google responsible for using data to create excellent user experiences across these products, partnering closely with Product, Engineering, and User Experience Research. Formerly, Lisa was Head of Data Science and Engineering for Twitter, helping drive the strategy and direction of the Twitter product through machine learning, metric development, experimentation, and causal analyses. Before Twitter, Lisa led the Azure Customer Growth Analytics organization as part of Microsoft Cloud Data Sciences. Her team was responsible for analyzing OKRs, informing data-driven decisions, and developing data science models to help customers be successful on Azure. Lisa worked at Microsoft for 17 years and also helped develop multiple versions of Visual Studio. She holds Bachelor's and Master's degrees in Applied Mathematics from Harvard. You can follow Lisa on LinkedIn and Medium.


Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously, Jeff was a co-founder of Stupeflix (acquired by GoPro), where he served as director of Product Management, Product Marketing, Business Development, and Corporate Development.


Helen Byrne

VP, Solution Architect
Graphcore

Helen leads the Solution Architects team at Graphcore, helping innovators build their AI solutions using Graphcore’s Intelligence Processing Units (IPUs). She has been at Graphcore for more than 5 years, previously leading AI Field Engineering and working in AI Research on problems in distributed machine learning. Before landing in the technology industry, she worked in investment banking. Her background is in mathematics and she has an MSc in Artificial Intelligence.


 

(Moderator) Varun Mehta

Executive Director, Head of ESG Data and Technology Product Management
Morgan Stanley


Abstract coming soon...


Wayne Wang

Founder & CEO
Moffett AI

Wayne Wang is the Founder & CEO of Moffett AI and a Silicon Valley expert in digital-analog hybrid circuits with 15 years of experience, primarily as a CPU high-speed link architect.

He has several years of experience in semiconductor entrepreneurship in Silicon Valley. He was previously a core architect at Intel and Qualcomm, and participated in the development of five generations of Intel CPU processors, with cumulative mass production of over 5 billion units.
