Testing Large Language Model (LLM) Vulnerabilities Using Adversarial Attacks
by Venkatesh Yadav July 19, 2023 Generative AI H2O LLM Studio Large language models LLM Limitations LLM Robustness LLM Safety Responsible AI

Adversarial analysis seeks to explain a machine learning model by understanding locally what changes need to be made to the input to change a model’s outcome. Depending on the context, adversarial results could be used as attacks, in which a change is made to trick a model into reaching a different outcome. Or they could […]
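The idea of flipping a model's outcome with a small input change can be sketched in miniature. The toy keyword classifier and the character-swap perturbation below are illustrative assumptions, not the post's method; real LLM attacks use gradient- or search-based perturbations.

```python
# A minimal sketch of an adversarial text attack on a toy classifier
# (illustrative only; not the attack method used in the post).

def toy_sentiment(text: str) -> str:
    """Classify as 'positive' if a trigger word is present."""
    return "positive" if "excellent" in text.lower() else "negative"

def perturb(text: str) -> str:
    """Adversarial character swap: 'excellent' -> 'exce1lent'.

    A change that a human barely notices but the model keys on.
    """
    return text.replace("excellent", "exce1lent")

original = "The service was excellent."
attacked = perturb(original)

print(toy_sentiment(original))  # positive
print(toy_sentiment(attacked))  # negative
```

The attack succeeds because the perturbation targets exactly the feature the model relies on, which is also why adversarial results double as local explanations.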

H2O LLM DataStudio: Streamlining Data Curation and Data Preparation for LLM-Related Tasks
by Parul Pandey June 14, 2023 Data Preparation H2O LLM Studio Large language models LLM DataStudio NLP

A no-code application and toolkit to streamline data preparation tasks for Large Language Models (LLMs). H2O LLM DataStudio offers a comprehensive range of preprocessing and preparation functions, such as text cleaning, text quality detection, tokenization, truncation, and […]
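The preprocessing steps the teaser names (cleaning, tokenization, truncation) can be sketched as a small pipeline. This is a stdlib-only assumption of what such functions do, not the H2O LLM DataStudio API; production pipelines use subword tokenizers and richer quality checks.

```python
import re

def clean_text(text: str) -> str:
    """Text cleaning: strip HTML tags and collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text: str) -> list[str]:
    """Whitespace tokenization (real pipelines use subword tokenizers)."""
    return text.split()

def truncate(tokens: list[str], max_len: int) -> list[str]:
    """Truncation: cap the sequence at the model's context length."""
    return tokens[:max_len]

raw = "<p>Large   Language Models need   clean data.</p>"
tokens = truncate(tokenize(clean_text(raw)), max_len=4)
print(tokens)  # ['Large', 'Language', 'Models', 'need']
```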

Effortless Fine-Tuning of Large Language Models with Open-Source H2O LLM Studio
by h2oai May 1, 2023 H2O LLM Studio

While the pace at which Large Language Models (LLMs) drive breakthroughs is remarkable, these pre-trained models are not always tailored to specific domains. Fine-tuning, the process of adapting a pre-trained language model to a specific task or domain, plays a critical role in NLP applications. However, fine-tuning can be challenging, requiring coding […]
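Fine-tuning can be shown in miniature: start from "pre-trained" weights and take small gradient steps on task-specific data. The one-parameter linear model below is a deliberately simplified assumption; H2O LLM Studio applies the same principle to billions of LLM parameters without requiring code like this.

```python
# Fine-tuning in miniature: adapt a pre-trained weight with a few
# small gradient steps on task data (squared-error loss, SGD).

def fine_tune(w: float, data: list[tuple[float, float]],
              lr: float = 0.1, epochs: int = 50) -> float:
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

pretrained_w = 1.0                     # weight from a generic task
task_data = [(1.0, 2.0), (2.0, 4.0)]   # task-specific data: y = 2x
tuned_w = fine_tune(pretrained_w, task_data)
print(round(tuned_w, 2))  # 2.0
```

A small learning rate matters here for the same reason it does with LLMs: the goal is to adapt the pre-trained weights to the task, not to overwrite them.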
