People regularly receive algorithm-based recommendations (e.g., on social media and in online shopping). Today, such algorithmic recommendations cover an increasingly wide range of applications, such as finding recipes that match an individual’s needs and food preferences. To give better-fitting recommendations, algorithms incorporate large amounts of personal information about the individual, which raises privacy concerns. When are people willing to share such information with algorithm-based systems? While previous research has indicated that the perceived benefits of platform use and provider trustworthiness influence whether people are willing to disclose personal information, these studies often have one of two major limitations: they are either survey-based, precluding causal conclusions, or lack realistic usage (i.e., they rely on vignettes). In the present experimental study (N = 329), we therefore manipulated provider trustworthiness and asked participants about their willingness to disclose information directly after they had interacted with an algorithm-based system for recipe recommendations. Results indicate that higher perceived benefits and higher provider trustworthiness are both associated with greater willingness to disclose information, and that these effects are independent of each other. The present research thus suggests that provider trustworthiness causally increases willingness to share information and that high benefits do not compensate for low provider trustworthiness.