Customer-obsessed science
- July 03, 2024: Gradient-boosted decision trees aggregate model outputs, and Shapley values help identify the most useful models for the ensemble.
- June 13, 2024: The fight against hallucination in retrieval-augmented-generation models starts with a method for accurately assessing it.
- June 13, 2024: As in other areas of AI, generative models and foundation models, such as vision-language models, are a hot topic.
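The first item above mentions using Shapley values to identify the most useful models for an ensemble. As an illustration only (the article's actual method is not reproduced here), this sketch computes exact Shapley values for a toy ensemble; the model names, accuracy numbers, and the max-accuracy value function are all invented:

```python
from itertools import permutations
from math import factorial

def shapley_values(models, value):
    """Exact Shapley values: each model's average marginal
    contribution to the ensemble's value over all orderings."""
    phi = {m: 0.0 for m in models}
    for order in permutations(models):
        coalition, prev = [], value([])
        for m in order:
            coalition.append(m)
            cur = value(coalition)
            phi[m] += cur - prev
            prev = cur
    n_orders = factorial(len(models))
    return {m: v / n_orders for m, v in phi.items()}

# Hypothetical held-out accuracies; the ensemble's value is taken
# to be its best member's accuracy (a deliberately simple stand-in).
acc = {"gbdt": 0.82, "linear": 0.74, "mlp": 0.80}
value = lambda coalition: max((acc[m] for m in coalition), default=0.0)
sv = shapley_values(list(acc), value)
```

By construction the Shapley values sum to the full ensemble's value, so the per-model scores can be read directly as each model's share of credit when deciding which members to keep.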
- July 14-18, 2024
- July 21-27, 2024
- August 11-16, 2024
- July 08, 2024: University students will compete for cash prizes in a competition to securely advance LLMs that code.
- 2024: Image-to-image matching has been well studied in the computer vision community. Previous studies mainly focus on training a deep metric learning model to match visual patterns between the query image and gallery images. In this study, we show that pure image-to-image matching suffers from false positives caused by matching to local visual patterns. To alleviate this issue, we propose to leverage recent …
- Information Retrieval (IR) practitioners often train separate ranking models for different domains (geographic regions, languages, stores, websites, ...), as it is believed that exclusively training on in-domain data yields the best performance when sufficient data is available. Despite the performance gains, training multiple models comes at a higher cost to train, maintain, and update than having …
- Methods to evaluate Large Language Model (LLM) responses and detect inconsistencies with respect to the provided knowledge, also known as hallucinations, are becoming increasingly important for LLM applications. Current metrics fall short in their ability to provide explainable decisions and systematically check all pieces of information in the response, and are often too computationally expensive to be used …
- Transactions on Machine Learning Research, 2024: We consider a local planner that utilizes model predictive control to locally deviate from a prescribed global path in response to dynamic environments, taking the system dynamics into account. To ensure consistency between the local and global paths, we introduce the concept of locally homotopic paths for paths with different origins and destinations. We then formulate a hard constraint to ensure that …
- 2024: We study the problem of differentially private (DP) fine-tuning of large pre-trained models, a recent privacy-preserving approach suitable for solving downstream tasks with sensitive data. Existing work has demonstrated that high accuracy is possible under strong privacy constraints, yet requires significant computational overhead or modifications to the network architecture. We propose differentially private …
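The last abstract concerns differentially private fine-tuning. The standard building block in this area is DP-SGD, which clips each example's gradient and adds Gaussian noise to the averaged update; the sketch below shows one such update in plain NumPy. It is not the paper's proposal, and every name and number here is illustrative:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average the clipped gradients, then add Gaussian noise with scale
    noise_mult * clip_norm / batch_size before the descent step."""
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=params.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(0)
params = np.zeros(4)
grads = [rng.normal(size=4) for _ in range(8)]  # stand-in per-example gradients
new_params = dp_sgd_step(params, grads, clip_norm=1.0, noise_mult=1.1,
                         lr=0.1, rng=rng)
```

Per-example clipping bounds any single example's influence on the update, which is what lets the added Gaussian noise translate into a formal privacy guarantee; the computational overhead the abstract alludes to comes largely from materializing those per-example gradients.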
Resources
- We look for talent from around the world for applied scientists, data scientists, economists, research scientists, scholars, academics, PhDs, and interns.
- We hire world-class academics to work on large-scale technical challenges while they continue to teach and conduct research at their universities. Learn more about each program and how to apply below.
- Supporting research at academic institutions and non-profit organizations in areas that align with our mission to advance customer-obsessed science.