LLM-related research papers published on 8th May, 2024
Today's edition covers important papers demonstrating LLM deployment in physical surgery, efficient selection of LLM deployment configurations, methods to speed up inference, and more.
🚀 Takeaways from today's papers
A new research paper uses an LLM-powered robot in physical surgery!
The conv-basis paper introduces an approximation method to reduce attention-layer computation
The Vidur framework finds an optimal LLM deployment configuration
Cohere AI published a new study analyzing under-trained tokens in multiple LLMs
🔬Core research improving LLMs!
💡Why?: Attention in LLMs has a quadratic computational cost, which limits scalability and further improvement on longer contexts.
💻How?:
The paper leverages the convolution-like structure of attention matrices to develop an efficient approximation method for attention computation using convolution matrices.
The paper proposes a conv basis system, analogous to the rank basis, and shows that any lower triangular (attention) matrix can always be decomposed as a sum of k structured convolution matrices in this basis system.
The paper further designs an algorithm to quickly decompose the attention matrix into k convolution matrices. Thanks to the Fast Fourier Transform (FFT), attention inference can then be computed in O(knd log n) time, where d is the hidden dimension.
In practice, d ≪ n, e.g., d = 3,072 and n = 1,000,000 for Gemma. Thus, when kd = n^{o(1)}, the algorithm achieves almost linear time, i.e., n^{1+o(1)}. Furthermore, the attention forward pass during training and the backward gradient can also be computed in n^{1+o(1)} time.
This approach avoids explicitly computing the n×n attention matrix, which largely alleviates the quadratic computational complexity.
Furthermore, this algorithm works on any input matrices. This work provides a new paradigm for accelerating attention computation in transformers to enable their application to longer contexts.
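To make the conv-basis idea concrete, here is a minimal NumPy sketch (my illustration, not the paper's implementation; the kernels below are random stand-ins rather than the output of the paper's decomposition algorithm). It shows why a lower-triangular convolution matrix never needs to be materialized: its product with a vector can be computed in O(n log n) via FFT, and summing k such products over the d columns of V yields the O(knd log n) cost quoted above.

```python
import numpy as np

def conv_matrix_vector_product(kernel, x):
    """Multiply a lower-triangular convolution (Toeplitz) matrix H(kernel) by x
    in O(n log n) via FFT instead of forming the n x n matrix explicitly.
    H[i, j] = kernel[i - j] for i >= j, and 0 otherwise (causal convolution)."""
    n = len(x)
    size = 2 * n  # zero-pad to avoid circular wrap-around
    y = np.fft.irfft(np.fft.rfft(kernel, size) * np.fft.rfft(x, size), size)
    return y[:n]

# Attention-like output as a sum of k convolution matrices acting on V's columns.
n, d, k = 8, 4, 2
rng = np.random.default_rng(0)
kernels = rng.standard_normal((k, n))   # illustrative causal kernels, one per basis element
V = rng.standard_normal((n, d))

out = np.zeros((n, d))
for kernel in kernels:
    for j in range(d):
        out[:, j] += conv_matrix_vector_product(kernel, V[:, j])
# Total cost: O(k * d * n log n) instead of O(n^2 * d) for dense attention.
```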
⚙️LLMOps + semiconductor enhancement
🔗GitHub: https://github.com/microsoft/vidur
💡Why?: Deployment of LLMs can still be optimized. At the moment, it is very expensive and time-consuming because it requires experimentally running an application workload against different LLM implementations and exploring a large configuration space of system knobs.
💻How?: The research paper proposes a solution called Vidur: a large-scale, high-fidelity, and easily-extensible simulation framework for LLM inference performance. Vidur uses a combination of experimental profiling and predictive modeling to model the performance of LLM operators. It then evaluates end-to-end inference performance for different workloads by estimating key metrics such as latency and throughput. This allows for efficient exploration of the large configuration space and helps identify the most cost-effective deployment configuration that meets application performance constraints.
📊Results: The paper does not report a single headline performance number for Vidur, but it states that Vidur estimates inference latency with less than 9% error across a wide range of LLMs, indicating high fidelity. Additionally, the paper shows that Vidur-Search, a configuration search tool built on top of Vidur, found the best deployment configuration for LLaMA2-70B in just one hour on a CPU machine!
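As a rough picture of what such a configuration search does (a toy sketch only; the knobs, predictor, and prices below are made up and this is not Vidur's API), one can sweep the system knobs, query a performance model for each candidate, discard configurations that violate the latency constraint, and keep the cheapest one:

```python
from itertools import product

# Hypothetical per-configuration predictor; in a real system this role is played
# by profiling-backed performance models, here it is just a stand-in function.
def predict_latency_ms(tensor_parallel, batch_size, gpu_type):
    base = {"A100": 30.0, "H100": 20.0}[gpu_type]
    return base * batch_size / tensor_parallel + 5.0 * tensor_parallel

def cost_per_hour(tensor_parallel, gpu_type):
    hourly = {"A100": 4.0, "H100": 12.0}[gpu_type]  # made-up prices
    return hourly * tensor_parallel

def search(latency_slo_ms=100.0):
    best = None
    for tp, bs, gpu in product([1, 2, 4, 8], [1, 4, 16, 64], ["A100", "H100"]):
        latency = predict_latency_ms(tp, bs, gpu)
        if latency > latency_slo_ms:
            continue  # violates the application's latency constraint
        throughput = bs * 1000.0 / latency                 # requests per second (toy)
        dollars_per_req = cost_per_hour(tp, gpu) / (throughput * 3600)
        if best is None or dollars_per_req < best[0]:
            best = (dollars_per_req, {"tp": tp, "batch": bs, "gpu": gpu, "latency_ms": latency})
    return best

print(search())
```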
💡Why?: LLM inference is still slow, specifically in the prompt phase, where the first token is generated.
💻How?: The paper proposes an efficient parallelization scheme called KV-Runahead. This scheme orchestrates multiple processes to populate the key-value cache (KV-cache) in parallel during the prompt phase, minimizing the time-to-first-token (TTFT) and allowing faster generation of subsequent tokens. Because the KV-cache mechanism already exists for the extension phase and is dual-purposed here, the scheme reduces computation and is easy to implement. Context-level load-balancing is also used to handle uneven KV-cache generation and optimize TTFT.
📊Results: The research paper shows that KV-Runahead can offer over 1.4x and 1.6x speedups for Llama 7B and Falcon 7B, respectively, compared to existing parallelization schemes. This demonstrates a significant performance improvement for LLM inference.
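The context-level load-balancing idea can be illustrated with a small sketch (my own, not the paper's code): under causal attention, the work for a chunk of the prompt grows with the length of everything before it, so an even token split overloads later workers. The helper below picks uneven chunk boundaries so each worker's estimated attention work is roughly equal.

```python
def balanced_chunks(n_tokens, n_workers):
    """Split a prompt of n_tokens into n_workers contiguous chunks so that each
    chunk's causal-attention work (sum of prefix lengths over its positions)
    is roughly equal, rather than splitting token counts evenly."""
    total_work = n_tokens * (n_tokens + 1) / 2   # sum over positions of prefix length
    boundaries, start = [], 0
    for w in range(1, n_workers):
        target = total_work * w / n_workers
        # position p such that cumulative work p*(p+1)/2 reaches the target
        p = int(((8 * target + 1) ** 0.5 - 1) / 2)
        boundaries.append(max(p, start + 1))
        start = boundaries[-1]
    boundaries.append(n_tokens)
    chunks, start = [], 0
    for end in boundaries:
        chunks.append((start, end))
        start = end
    return chunks

# Example: an 8k-token prompt on 4 workers gives the first worker the longest chunk,
# since its tokens attend to the shortest prefixes.
print(balanced_chunks(8192, 4))
```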
🧪 LLM evaluations
💡Why?: Detects LLMs that "fake" alignment, behaving well only in scenarios where misbehaviour would be caught.
💻How?: Proposes a benchmark to detect these "alignment fakers" by creating a set of 324 pairs of LLMs that have been fine-tuned to select actions in role-play scenarios. One model in each pair is consistently aligned, while the other purposely misbehaves in scenarios where it is unlikely to be caught. The benchmark task is to identify the misbehaving model using only inputs where both models behave identically. The paper tests five different detection strategies, one of which is able to identify 98% of the alignment fakers.
💡Why?: To assess the systemic biases perpetuated by LLMs in modern societies, particularly in user-facing applications and enterprise tools. These biases can have harmful effects on marginalized groups and lead to discrimination and exclusion.
💻How?: The paper introduces the Covert Harms and Social Threats (CHAST) metrics, which are grounded in social science literature. These metrics provide a more comprehensive and nuanced understanding of the various forms in which harms can manifest in LLM-generated conversations. To evaluate the presence of covert harms, the researchers use evaluation models aligned with human assessments, allowing more accurate detection of harmful views expressed in seemingly neutral language.
💡Why?: Sets a benchmark for "LLMs in program repair".
💻How?: The paper proposes a novel educational program repair benchmark that curates two high-quality programming datasets and introduces a unified evaluation procedure with a new metric, rouge@k, to measure the quality of repairs. This benchmark will facilitate the comparison of different approaches to program repair and help establish baseline performance.
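The paper defines rouge@k precisely; one natural best-of-k reading, sketched below under that assumption, mirrors pass@k: generate k candidate repairs per buggy program, score each against the reference repair with ROUGE-L (here a plain longest-common-subsequence F-score over whitespace tokens), and average the per-problem maxima.

```python
def lcs_length(a, b):
    # Classic dynamic-programming longest common subsequence over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

def rouge_at_k(candidates_per_problem, references):
    """Assumed best-of-k reading: average over problems of the max ROUGE-L F1
    among each problem's k candidate repairs."""
    scores = [
        max(rouge_l_f1(c, ref) for c in candidates)
        for candidates, ref in zip(candidates_per_problem, references)
    ]
    return sum(scores) / len(scores)

# Toy usage with k = 2 candidate repairs for a single buggy program.
print(rouge_at_k([["return a + b", "return a - b"]], ["return a + b"]))
```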
💡Why?: Benchmarks how fairly LLMs can negotiate within a game-theoretic framework.
💻How?: The paper uses simulations and gradient-boosted regression with Shapley explainers to measure the impact of different personality traits on negotiation behaviour and outcomes. It works by analyzing data from 1,500 simulations of single-issue and multi-issue negotiations and identifying patterns between personality traits and negotiation tendencies.
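A compact sketch of that analysis pipeline (illustrative only; synthetic data stands in for the paper's 1,500 simulation logs, the toy relationship is invented, and the shap package is required): fit a gradient-boosted regressor from personality-trait features to a negotiation outcome, then use Shapley values to attribute predictions to each trait.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for simulation logs: Big Five traits per simulated agent
# and a negotiation outcome (e.g., share of surplus captured).
traits = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
X = rng.uniform(0, 1, size=(1500, len(traits)))
y = 0.5 + 0.3 * X[:, 2] - 0.4 * X[:, 3] + 0.05 * rng.standard_normal(1500)  # toy relationship

model = GradientBoostingRegressor().fit(X, y)

# Shapley attributions: how much each trait pushes the predicted outcome up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for trait, importance in sorted(zip(traits, mean_abs), key=lambda t: -t[1]):
    print(f"{trait:>18}: {importance:.3f}")
```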
📚Want to read more? Survey papers
Research paper proposes a comprehensive analysis of LLM tokenizers, using a combination of tokenizer analysis, model weight-based indicators, and prompting techniques. This allows for the automatic detection of problematic tokens, which can then be addressed to improve the efficiency and safety of language models.
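As one example of a model-weight-based indicator (a hedged sketch of the general idea, not the paper's exact method), tokens whose input-embedding rows have unusually small norm are candidates for never having been updated during training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Model-weight indicator: norm of each token's input embedding row.
with torch.no_grad():
    emb = model.get_input_embeddings().weight        # (vocab_size, hidden_dim)
    norms = emb.norm(dim=-1)

# Flag the tokens with the smallest embedding norms as possibly under-trained.
k = 20
suspect_ids = torch.argsort(norms)[:k]
for tid in suspect_ids.tolist():
    print(f"{tid:>6}  norm={norms[tid]:.3f}  token={tokenizer.convert_ids_to_tokens(tid)!r}")
```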
🎉 Creative ways to use LLMs!
💡Why?: Can we enhance conventional ML algorithms with LLMs?
💻How?: The paper proposes to use pre-trained LLMs to improve the prediction performance of classical machine learning estimators. This is achieved by integrating an LLM into the estimator and utilizing its ability to simulate various scenarios and generate output based on specific instructions and multimodal input. The proposed approaches are tested on both standard supervised binary classification tasks and a transfer learning task where the test data's distribution differs from the training data's.
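One simple way to read "integrating an LLM into the estimator" (a hypothetical sketch, not the paper's exact recipe; llm_risk_score is a placeholder for a real LLM call): treat the LLM's judgment on each example as one extra feature alongside the tabular ones.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def llm_risk_score(text: str) -> float:
    """Placeholder for a real LLM call that would be prompted to rate each
    example (e.g., 'On a 0-1 scale, how likely is this loan to default?')."""
    return min(1.0, len(text) / 100.0)  # stand-in heuristic, NOT a real model

rng = np.random.default_rng(0)
n = 500
tabular = rng.standard_normal((n, 3))                       # classical features
notes = ["applicant note " + "x" * int(abs(v) * 40) for v in tabular[:, 0]]
labels = (tabular[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Append the LLM-derived score as one more column for the classical estimator.
llm_feature = np.array([llm_risk_score(t) for t in notes]).reshape(-1, 1)
X = np.hstack([tabular, llm_feature])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy with LLM feature:", clf.score(X_test, y_test))
```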
Information Extraction from Historical Well Records Using a Large Language Model: an interesting domain application!
💡Why?: Can LLMs help locate orphaned wells, i.e., abandoned oil and gas wells that pose environmental risks if they are not properly located and plugged?
💻How?: The paper proposes a retrieval-augmented generation (RAG) setup to extract information from unstructured historical documents!
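A bare-bones sketch of such a RAG pipeline (illustrative only; the paper's retriever, prompts, and extraction schema will differ, and the records below are invented): index record chunks with TF-IDF, retrieve the chunk most similar to a question about a well, and place it into an extraction prompt for an LLM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for scanned historical well records.
chunks = [
    "Well No. 7 drilled 1931, Sec. 12 T5N R3W, total depth 2,450 ft, plugged 1954.",
    "Lease transferred to Acme Oil in 1948; production records incomplete.",
    "Well No. 9 location unknown, last inspection 1962, casing condition poor.",
]

question = "What is the total depth and plugging date of Well No. 7?"

# Retrieve: rank chunks by TF-IDF cosine similarity to the question.
vectorizer = TfidfVectorizer().fit(chunks + [question])
scores = cosine_similarity(vectorizer.transform([question]), vectorizer.transform(chunks))[0]
top_chunk = chunks[scores.argmax()]

# Generate: the retrieved text is placed into an extraction prompt for an LLM.
prompt = (
    "Extract well_name, total_depth_ft, and plug_date as JSON from the record below.\n"
    f"Record: {top_chunk}\nQuestion: {question}"
)
print(prompt)  # send `prompt` to the LLM of your choice
```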
💡Why?: The research paper addresses the issue of efficient communication within System of Systems (SoS), which is a complex and dynamic network of constituent systems called holons. In order to ensure interoperability and collaborative functioning among these holons, there is a need for improved communication mechanisms.
💻How?: The research paper proposes an innovative solution to enhance holon communication within SoS by integrating Conversational Generative Intelligence (CGI) techniques. This approach utilizes advancements in CGI, specifically LLMs, to enable holons to understand and act on natural language instructions. By leveraging CGI, holons can have more intuitive interactions with humans, improving social intelligence and ultimately leading to better coordination among diverse systems.
💡Why?: The paper aims to develop an effective way to analyze large amounts of qualitative data from online discussion forums.
💻How?: The paper proposes a novel LLM-based framework called QuaLLM. The framework uses a unique prompting methodology and evaluation strategy to analyze and extract quantitative insights from text data on online forums. It works by automatically processing the data and generating human-readable outputs, eliminating the need for significant human effort.
📊Results: The researchers successfully applied the QuaLLM framework to analyze over one million comments from two Reddit rideshare worker communities. The framework sets a new precedent for AI-assisted quantitative data analysis and has the potential to improve the efficiency and accuracy of analyzing data from online forums.
🤖 LLMs for robotics & VLMs
💡Why?: Can we leverage LLMs for object navigation within complex scenes, specifically how to effectively represent and utilize language information for this task?
💻How?: The paper proposes a novel language-driven, object-centric image representation, LOC-ZSON, which is specifically designed for object navigation. This representation is used to fine-tune a visual-language model (VLM) and handle complex object-level queries. In addition, the paper introduces an LLM-based augmentation and prompt templates to improve training stability and zero-shot inference. The proposed method is implemented on the Astro robot and tested in both simulated and real-world environments.
🔗GitHub: orbit-surgical.github.io/sufia
💡Why?: The research paper addresses the problem of limited dexterity and autonomy in robotic surgical assistants.
💻How?: The paper proposes SuFIA, a framework that combines natural language processing with perception modules to enable high-level planning and low-level control of a surgical robot. It uses LLMs to reason and make decisions, allowing for a learning-free approach to surgical tasks without the need for examples or motion primitives. The framework also incorporates a human-in-the-loop paradigm, giving control back to the surgeon when necessary to mitigate errors and ensure mission-critical tasks are completed successfully.
📊Results: The paper evaluates SuFIA on four surgical sub-tasks in a simulation environment and two sub-tasks on a physical surgical robotic platform in the lab. The results demonstrate that SuFIA is able to successfully perform common surgical tasks with supervised autonomous operation, even under challenging physical and workspace conditions.