- A Third-Order Gaussian Process Trajectory Representation Framework with Closed-Form Kinematics for Continuous-Time Motion Estimation In this paper, we propose a third-order, i.e., white-noise-on-jerk, Gaussian Process (GP) Trajectory Representation (TR) framework for continuous-time (CT) motion estimation (ME) tasks. Our framework features a unified trajectory representation that encapsulates the kinematic models of both SO(3)×R^3 and SE(3) pose representations. This encapsulation strategy allows users to use the same implementation of measurement-based factors for either choice of pose representation, which facilitates experimentation and comparison to achieve the best model for the ME task. In addition, unique to our framework, we derive the kinematic models with the closed-form temporal derivatives of the local variable of SO(3) and SE(3), which so far has only been approximated based on the Taylor expansion in the literature. Our experiments show that these kinematic models can improve the estimation accuracy in high-speed scenarios. All analytical Jacobians of the interpolated states with respect to the support states of the trajectory representation, as well as the motion prior factors, are also provided for accelerated Gauss-Newton (GN) optimization. Our experiments demonstrate the efficacy and efficiency of the framework in various motion estimation tasks such as localization, calibration, and odometry, facilitating fast prototyping for ME researchers. We release the source code for the benefit of the community. Our project is available at https://github.com/brytsknguyen/gptr. 8 authors · Oct 30, 2024
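For readers unfamiliar with white-noise-on-jerk priors, the block below sketches the standard third-order GP motion prior on a vector-valued local variable (written with a generic placeholder symbol ξ; the paper's contribution, the closed-form temporal derivatives of this local variable for SO(3) and SE(3), is not reproduced here):

```latex
% Standard white-noise-on-jerk (third-order) GP prior; \xi denotes the local
% variable (a generic placeholder symbol, not necessarily the paper's notation).
\dot{\boldsymbol{\gamma}}(t) = \mathbf{A}\,\boldsymbol{\gamma}(t) + \mathbf{B}\,\mathbf{w}(t),
\qquad
\boldsymbol{\gamma}(t) =
\begin{bmatrix} \boldsymbol{\xi}(t) \\ \dot{\boldsymbol{\xi}}(t) \\ \ddot{\boldsymbol{\xi}}(t) \end{bmatrix},
\quad
\mathbf{A} =
\begin{bmatrix}
\mathbf{0} & \mathbf{I} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{I} \\
\mathbf{0} & \mathbf{0} & \mathbf{0}
\end{bmatrix},
\quad
\mathbf{B} =
\begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \mathbf{I} \end{bmatrix},
\quad
\mathbf{w}(t) \sim \mathcal{GP}\bigl(\mathbf{0},\, \mathbf{Q}_c\,\delta(t - t')\bigr).
```

Because this prior is linear-Gaussian in the lifted state, the trajectory can be queried at any time between two support states in closed form, which is what makes the analytical interpolation Jacobians mentioned in the abstract tractable.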
- Learning to Compress Prompt in Natural Language Formats Large language models (LLMs) excel at a wide range of natural language processing tasks, but their abilities are constrained by poor performance on long contexts, slow inference, and high computation cost. Deploying LLMs with precise and informative context helps users process large-scale datasets more effectively and cost-efficiently. Existing works rely on compressing long prompt contexts into soft prompts. However, soft prompt compression encounters limitations in transferability across different LLMs, especially API-based LLMs. To this end, this work aims to compress lengthy prompts into natural language that remains transferable across LLMs. This poses two challenges: (i) Natural Language (NL) prompts are incompatible with back-propagation, and (ii) NL prompts lack flexibility in imposing length constraints. In this work, we propose a Natural Language Prompt Encapsulation (Nano-Capsulator) framework that compresses original prompts into an NL-formatted Capsule Prompt while maintaining prompt utility and transferability. Specifically, to tackle the first challenge, the Nano-Capsulator is optimized by a reward function that interacts with the proposed semantics-preserving loss. To address the second challenge, the reward function additionally incorporates explicit length constraints. Experimental results demonstrate that the Capsule Prompt reduces the original prompt length by 81.4%, decreases inference latency by up to 4.5x, and saves 80.1% of budget overheads while providing transferability across diverse LLMs and different datasets. 6 authors · Feb 28, 2024
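As a rough, non-authoritative illustration of the optimization described in this abstract, the sketch below combines a semantics-preservation score with a hard length budget into a single scalar reward; the helper `semantic_similarity` and the token budget are assumptions for illustration, not the paper's actual semantics-preserving loss or constraint values.

```python
# Hypothetical sketch of a reward combining semantic preservation with a
# length constraint, in the spirit of the Nano-Capsulator abstract.
# `semantic_similarity` is an assumed placeholder scorer, not the paper's loss.

def semantic_similarity(original: str, compressed: str) -> float:
    """Placeholder scorer in [0, 1]; replace with an embedding-based metric."""
    orig_words = set(original.lower().split())
    comp_words = set(compressed.lower().split())
    return len(orig_words & comp_words) / max(len(orig_words), 1)

def compression_reward(original: str, compressed: str, max_tokens: int = 100) -> float:
    """Reward = semantic preservation, zeroed out if the length budget is violated."""
    if len(compressed.split()) > max_tokens:
        return 0.0  # hard length constraint
    return semantic_similarity(original, compressed)
```

Because the Capsule Prompt is plain text, such a reward cannot be back-propagated through directly, which is why the abstract frames the first challenge around NL prompts being incompatible with back-propagation.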
- Quantum Advantage Actor-Critic for Reinforcement Learning Quantum computing offers efficient encapsulation of high-dimensional states. In this work, we propose a novel quantum reinforcement learning approach that combines the Advantage Actor-Critic algorithm with variational quantum circuits by substituting parts of the classical components. This approach addresses reinforcement learning's scalability concerns while maintaining high performance. We empirically test multiple quantum Advantage Actor-Critic configurations with the well-known Cart Pole environment to evaluate our approach in control tasks with continuous state spaces. Our results indicate that the hybrid strategy of using either a quantum actor or quantum critic with classical post-processing yields a substantial performance increase compared to pure classical and pure quantum variants with similar parameter counts. They further reveal the limits of current quantum approaches due to the hardware constraints of noisy intermediate-scale quantum computers, suggesting further research to scale hybrid approaches for larger and more complex control tasks. 7 authors · Jan 13, 2024
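To make the hybrid actor idea concrete, here is a minimal sketch of a variational-quantum-circuit actor with classical post-processing for Cart Pole's 4-dimensional observations and 2 actions; it assumes PennyLane and is not the authors' specific circuit or training setup.

```python
# Illustrative sketch of a hybrid actor: a variational quantum circuit whose
# expectation values are post-processed classically into action probabilities.
# NOT the authors' architecture; assumes PennyLane and CartPole (4 obs, 2 actions).
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_actor_circuit(obs, weights):
    qml.AngleEmbedding(obs, wires=range(n_qubits))                 # encode the state
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))   # trainable ansatz
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Classical post-processing: linear map from qubit readouts to 2 action logits.
weights = np.random.normal(size=qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits))
post_w = np.random.normal(size=(n_qubits, 2))

def action_probabilities(obs):
    z = np.array(quantum_actor_circuit(obs, weights))
    logits = z @ post_w
    return np.exp(logits) / np.exp(logits).sum()  # softmax over the 2 actions
```

In an Advantage Actor-Critic loop, `action_probabilities` would play the role of the policy π(a|s); replacing the actor, the critic, or both with such a circuit spans the design space the abstract explores.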
- AXLearn: Modular Large Model Training on Heterogeneous Infrastructure We design and implement AXLearn, a production deep learning system that facilitates scalable and high-performance training of large deep learning models. Compared to other state-of-the-art deep learning systems, AXLearn has a unique focus on modularity and support for heterogeneous hardware infrastructure. AXLearn's internal interfaces between software components follow strict encapsulation, allowing different components to be assembled to facilitate rapid model development and experimentation on heterogeneous compute infrastructure. We introduce a novel method of quantifying modularity via Lines-of-Code (LoC)-complexity, which demonstrates how our system maintains constant complexity as we scale the components in the system, compared to linear or quadratic complexity in other systems. This allows integrating features such as Rotary Position Embeddings (RoPE) into AXLearn across hundreds of modules with just 10 lines of code, compared to the hundreds required in other systems. At the same time, AXLearn maintains equivalent performance compared to state-of-the-art training systems. Finally, we share our experience in the development and operation of AXLearn. 37 authors · Jul 7, 2025
- KyFrog: A High-Security LWE-Based KEM Inspired by ML-KEM KyFrog is a conservative Learning-with-Errors (LWE) key-encapsulation mechanism designed to explore an alternative operating point to schemes with relatively small public keys and ciphertexts. KyFrog uses a larger dimension (n = 1024) and a small prime modulus q = 1103, together with narrow error distributions with standard deviations σ_s = σ_e = 1.4, to target a security level of approximately 2^{325} against state-of-the-art classical and quantum lattice attacks under standard cost models, as estimated using the Lattice Estimator. The price paid for this security margin is an extremely large KEM ciphertext (about 0.5 MiB), while public and secret keys remain in the same ballpark as ML-KEM. We describe the design rationale, parameter search methodology, and implementation details of KyFrog, and we compare its asymptotic security and concrete parameter sizes with the ML-KEM standard. All code and data for this work are released as free and open-source software, with the full C++23 implementation and experimental scripts available at: https://github.com/victormeloasm/kyfrog 2 authors · Dec 6, 2025
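For context on the hardness assumption behind the scheme, the snippet below generates a generic LWE instance with KyFrog-like parameters; it is a didactic sketch only, not KyFrog's key-generation, encoding, or encapsulation routine.

```python
# Generic LWE instance sketch with KyFrog-like parameters (n=1024, q=1103,
# sigma=1.4). Didactic illustration of the LWE assumption only; it is not
# the KyFrog implementation.
import numpy as np

n, q, sigma = 1024, 1103, 1.4
rng = np.random.default_rng()

A = rng.integers(0, q, size=(n, n))                      # public uniform matrix
s = np.rint(rng.normal(0, sigma, size=n)).astype(int)    # small secret vector
e = np.rint(rng.normal(0, sigma, size=n)).astype(int)    # small error vector
b = (A @ s + e) % q                                      # public vector b = As + e mod q

# Recovering s from (A, b) is the search-LWE problem; the abstract targets
# ~2^{325} attack cost for these narrow error distributions and n = 1024.
public_key, secret_key = (A, b), s
```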
- Edible Polysaccharides as Stabilizers and Carriers for the Delivery of Phenolic Compounds and Pigments in Food Formulations Food polysaccharides have emerged as suitable carriers of active substances and as additives in food and nutraceutical formulations, showing potential to stabilize bioactive compounds during the storage of microencapsulated preparations and in the gastrointestinal tract after intake, thereby improving their bioaccessibility and bioavailability. This review provides a comprehensive overview of the main polysaccharides employed as wall materials, including starch, maltodextrin, alginate, pectin, inulin, chitosan, and gum arabic, and discusses how structural interactions and physicochemical properties can benefit the microencapsulation of polyphenols and pigments. The main findings and principles of the major encapsulation techniques used to produce microparticles, including spray drying, freeze drying, extrusion, emulsification, and coacervation, are briefly described. Polysaccharides can entrap hydrophilic and hydrophobic compounds through physical interactions, forming a barrier around the nucleus or binding directly to the bioactive compound. Intermolecular binding between polysaccharides in the wall matrix and polyphenols and pigments in the nucleus can confer encapsulation efficiencies of up to 90%, governed mainly by hydrogen bonds and electrostatic interactions. Mixing wall polysaccharides during microparticle synthesis favors the encapsulation, solubility, storage stability, bioaccessibility, and bioactivity of the microencapsulated compounds. Clinical trials on the bioefficacy of polyphenols and pigments loaded in polysaccharide microparticles are scarce, and further evidence is required to reinforce the use of this technology. 7 authors · Nov 10, 2025
- JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models Code generation benchmarks such as HumanEval are widely adopted to evaluate LLMs' capabilities. However, after consolidating the latest 24 benchmarks, we noticed three significant imbalances. First, imbalanced programming languages: 95.8% of benchmarks involve Python, while only 5 benchmarks involve Java. Second, imbalanced code granularity: function-/statement-level benchmarks account for over 83.3% of benchmarks, only a handful extend to the class/project level, and all of those are limited to Python. Third, a lack of advanced features: existing benchmarks primarily assess basic coding skills while overlooking advanced Object-Oriented Programming (OOP) features (i.e., encapsulation, inheritance, and polymorphism). To fill these gaps, we propose JavaBench, a project-level Java benchmark that exercises OOP features. It comprises four Java projects with 389 methods in 106 Java classes. The test coverage is up to 92%, and JavaBench is validated by 282 undergraduate students, who reach a 90.93/100 average score (i.e., pass rate against the test suite), ensuring the quality of the documentation, code skeletons, and tests. To better evaluate LLMs' capability against JavaBench, we introduce a systematic evaluation design covering three context settings and five synthesis strategies at two granularities using three hierarchical metrics. Our extensive experiments yield several interesting findings. First, regarding project-level Java programming, LLMs are far behind undergraduate students (no project can be correctly completed by any of the studied LLMs, and at most 41.17% Pass@5 under a more relaxed evaluation). Second, using method signatures as prompt context may strike an ideal balance for project-level code generation. JavaBench is publicly available at https://github.com/java-bench/JavaBench. 5 authors · Jun 10, 2024
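The Pass@5 number quoted above is conventionally computed with the unbiased pass@k estimator used throughout the code-generation literature; the snippet below shows that standard formula for reference (how JavaBench aggregates it across methods and projects is described in the paper, not here).

```python
# Standard unbiased pass@k estimator: given n samples per task of which c pass
# the test suite, the probability that at least one of k randomly chosen
# samples passes is 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations for a method, 3 of which pass -> pass@5 ≈ 0.9167
print(round(pass_at_k(n=10, c=3, k=5), 4))
```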
- OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models Advancing automated programming necessitates robust and comprehensive code generation benchmarks, yet current evaluation frameworks largely neglect object-oriented programming (OOP) in favor of functional programming (FP), e.g., HumanEval and MBPP. To address this, our study introduces a pioneering OOP-focused benchmark, featuring 431 Python programs that encompass essential OOP concepts and features like classes and encapsulation methods. We propose a novel evaluation metric, pass@o, tailored for OOP, enhancing traditional pass@k measures. Our evaluation of 23 leading large language models (LLMs), including both general and code-specialized models, reveals three key insights: 1) pass@o offers a more relevant and comprehensive assessment for OOP code generation; 2) Despite excelling in FP, code-specialized LLMs like WizardCoder lag in OOP compared to models like ChatGPT; 3) The poor performance of all advanced LLMs on our OOP benchmark highlights a critical need for improvements in this field. Our benchmark and scripts are publicly released at: https://github.com/alphadl/OOP-eval. 6 authors · Jan 12, 2024
- Ludwig: a type-based declarative deep learning toolbox In this work we present Ludwig, a flexible, extensible, and easy-to-use toolbox that allows users to train deep learning models and use them to obtain predictions without writing code. Ludwig implements a novel approach to deep learning model building based on two main abstractions: data types and declarative configuration files. The data type abstraction allows for easier code and sub-model reuse, and the standardized interfaces imposed by this abstraction allow for encapsulation and make the code easy to extend. Declarative model definition configuration files enable inexperienced users to obtain effective models and increase the productivity of expert users. Alongside these two innovations, Ludwig introduces a general modularized deep learning architecture called Encoder-Combiner-Decoder that can be instantiated to perform a wide variety of machine learning tasks. These innovations make it possible for engineers, scientists from other fields, and, in general, a much broader audience to adopt deep learning models for their tasks, concretely helping to democratize deep learning. 3 authors · Sep 17, 2019
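To make the declarative-configuration abstraction concrete, a minimal Ludwig-style workflow looks roughly like the sketch below; the column names and CSV path are invented for illustration, and the exact return values of `train`/`predict` may differ across Ludwig versions.

```python
# Minimal sketch of Ludwig's type-based declarative workflow: the model is
# described by a configuration of typed input/output features rather than code.
# Column names ("review", "sentiment") and the CSV path are illustrative only.
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "review", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

model = LudwigModel(config)                        # data types drive encoder/decoder choice
train_results = model.train(dataset="reviews.csv")    # no model code written by the user
predict_results = model.predict(dataset="reviews.csv")
```

The declared feature types select the appropriate encoders and decoders within the Encoder-Combiner-Decoder architecture described in the abstract, which is what lets non-experts obtain working models from configuration alone.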