One problem arising in information fusion, decision-making, and other artificial intelligence challenges is to compute justified beliefs based on evidence. In real-life settings, this evidence may be inconsistent, incomplete, or uncertain, making evidence fusion highly non-trivial. The aim of this talk is to present a new model for measuring degrees of belief based on evidence that may have these challenging properties. To this end, we will start by introducing two established approaches to this problem: Dempster-Shafer Theory and Topological Models of Evidence. We will discuss their main advantages and limitations, and show how combining some of their tools yields a more general model. This model can reproduce both when appropriate constraints are imposed and, more notably, is flexible enough to compute beliefs according to various standards that represent agents’ evidential demands. The latter novelty allows users of our model to compute an agent’s (possibly) distinct degrees of belief, based on the same evidence, in situations where, e.g., the agent prioritizes avoiding false negatives and where it prioritizes avoiding false positives. Moreover, computing belief degrees with this model is #P-complete in general, matching the computational complexity of applying Dempster-Shafer Theory. Finally, we will invite the audience to discuss possible applications of this model.
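As a small taste of the kind of computation the talk builds on (and not the talk's new model itself), the sketch below implements classical Dempster-Shafer combination in Python: two mass functions over focal sets are merged by Dempster's rule, normalizing away conflicting mass, and a belief degree Bel(A) is read off as the total mass committed to subsets of A. The weather frame in the usage note is an invented example.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozensets (focal elements) to mass in [0, 1],
    each summing to 1. Returns the normalized combined mass function.
    """
    combined = {}
    conflict = 0.0  # total mass assigned to contradictory pairs
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Renormalize by the non-conflicting mass 1 - K
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

def belief(m, a):
    """Bel(A): total mass committed to subsets of A."""
    return sum(v for b, v in m.items() if b <= a)
```

For instance, with frame {rain, sun} and two bodies of evidence `m1 = {frozenset({'rain'}): 0.6, frozenset({'rain', 'sun'}): 0.4}` and `m2 = {frozenset({'rain'}): 0.5, frozenset({'sun'}): 0.3, frozenset({'rain', 'sun'}): 0.2}`, the combined belief in rain is `belief(combine(m1, m2), frozenset({'rain'}))`, i.e. 0.62/0.82.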