Testing Large Language Model (LLM) Vulnerabilities Using Adversarial Attacks
July 19, 2023 · Generative AI, H2O LLM Studio, Large Language Models, LLM Limitations, LLM Robustness, LLM Safety, Responsible AI

Adversarial analysis seeks to explain a machine learning model by understanding, locally, what changes must be made to the input to change the model's outcome. Depending on the context, adversarial results could be used as attacks, in which a change is made to trick a model into reaching a different outcome. Or they could […]
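The core idea in the excerpt above — searching for a small input change that flips a model's prediction — can be sketched with a toy example. The keyword-based "model" and the deletion-based search below are illustrative stand-ins, not H2O's actual method or API:

```python
# Toy sketch of adversarial analysis: find the smallest input edit
# (here, a single word deletion) that flips a classifier's outcome.

def toy_model(text: str) -> str:
    """A deliberately simple sentiment 'model' based on keyword counts."""
    positive = {"good", "great", "excellent", "love"}
    negative = {"bad", "terrible", "awful", "hate"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score > 0 else "negative"

def adversarial_deletion(text: str):
    """Greedily try removing one word at a time until the label flips.

    Returns the perturbed input, or None if no single deletion flips it.
    """
    original = toy_model(text)
    words = text.split()
    for i in range(len(words)):
        candidate = " ".join(words[:i] + words[i + 1:])
        if toy_model(candidate) != original:
            return candidate
    return None

example = "the food was good but service slow"
print(toy_model(example))               # positive
print(adversarial_deletion(example))    # removing 'good' flips it to negative
```

Real adversarial attacks on LLMs operate on prompts and embeddings rather than word deletions, but the local search structure — perturb, re-query, check the outcome — is the same.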
H2O LLM DataStudio: Streamlining Data Curation and Preparation for LLM-Related Tasks
June 14, 2023 · Data Preparation, H2O LLM Studio, Large Language Models, LLM DataStudio, NLP

A no-code application and toolkit to streamline data preparation tasks related to Large Language Models (LLMs). H2O LLM DataStudio is a no-code application designed to streamline data preparation tasks specifically for LLMs. It offers a comprehensive range of preprocessing and preparation functions such as text cleaning, text quality detection, tokenization, truncation, and […]
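The preprocessing steps named in the excerpt — text cleaning, tokenization, truncation — can be sketched in a few lines. The function names below are illustrative, not H2O LLM DataStudio's actual API, and real LLM pipelines use subword tokenizers rather than whitespace splitting:

```python
import re

def clean_text(text: str) -> str:
    """Strip HTML tags and collapse repeated whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)       # remove HTML tags
    return re.sub(r"\s+", " ", text).strip()   # normalize whitespace

def tokenize(text: str) -> list:
    """Naive whitespace tokenizer (real pipelines use subword tokenizers)."""
    return text.split()

def truncate(tokens: list, max_len: int) -> list:
    """Keep at most max_len tokens, as done before filling an LLM context."""
    return tokens[:max_len]

raw = "<p>H2O   LLM DataStudio   prepares text\nfor LLM fine-tuning.</p>"
tokens = truncate(tokenize(clean_text(raw)), max_len=6)
```

Chaining the three steps turns messy raw text into a bounded token sequence, which is the basic shape of any LLM data-preparation pipeline, whether run through a no-code tool or by hand.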
Effortless Fine-Tuning of Large Language Models with Open-Source H2O LLM Studio
May 1, 2023 · H2O LLM Studio

While the pace at which Large Language Models (LLMs) have been driving breakthroughs is remarkable, these pre-trained models may not always be tailored to specific domains. Fine-tuning, the process of adapting a pre-trained language model to a specific task or domain, plays a critical role in NLP applications. However, fine-tuning can be challenging, requiring coding […]