
Workshop on Future Directions in Computational Social Choice: Abstracts


The abstracts are listed below according to the names of the speakers. See the schedule here.

Simina Brânzei: Nash Social Welfare Approximation for Strategic Agents

The fair division of resources among strategic agents is an important age-old problem that has led to a rich body of literature. At the center of this literature lies the question of whether there exist mechanisms that can implement fair outcomes, despite the strategic behavior of the agents. A fundamental objective function used for measuring fair outcomes is the Nash social welfare (NSW), mathematically defined as the geometric mean of the agents' values in a given allocation. This objective function is maximized by widely known solution concepts such as Nash bargaining and the competitive equilibrium with equal incomes. In this work we focus on the question of (approximately) implementing this objective. The starting point of our analysis is the Fisher market, a fundamental model of an economy, whose benchmark is precisely the (weighted) Nash social welfare. We study two extreme classes of valuation functions, namely perfect substitutes and perfect complements, and find that for perfect substitutes, the Fisher market mechanism has a constant price of anarchy (PoA): at most 2 and at least e^(1/e) (≈1.44). However, for perfect complements, the Fisher market mechanism has arbitrarily bad performance, its bound degrading linearly with the number of players. Strikingly, the Trading Post mechanism, an indirect market mechanism also known as the Shapley-Shubik game, has significantly better performance than the Fisher market on its own benchmark. Not only does Trading Post attain a bound of 2 for perfect substitutes, but it also almost perfectly implements the NSW objective for perfect complements, where it achieves a price of anarchy of 1 + ε for every ε > 0. Moreover, we show that all the equilibria of the Trading Post mechanism are pure and satisfy an important notion of individual fairness known as proportionality. This is joint work with Vasilis Gkatzelis and Ruta Mehta.
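
As a quick illustration of the objective (a minimal sketch, not the mechanism studied in the talk), the NSW of an allocation is simply the geometric mean of the agents' realised values, which rewards balanced allocations over skewed ones:

```python
from math import prod

def nash_social_welfare(values):
    """Geometric mean of the agents' values under a given allocation."""
    n = len(values)
    return prod(values) ** (1.0 / n)

# Two allocations of value to three agents: both have the same total (12),
# but the NSW favours the more balanced one.
balanced = [4.0, 4.0, 4.0]   # NSW = 4.0
skewed = [10.0, 1.0, 1.0]    # NSW ≈ 2.15
assert nash_social_welfare(balanced) > nash_social_welfare(skewed)
```

The geometric mean is what makes NSW a natural fairness measure: multiplying any single agent's value by zero drives the whole objective to zero, so no agent can be completely neglected.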

Markus Brill: Approval Voting, Representation, and Liquid Democracy

In this talk, I will give an overview of recent results on justified representation, and I will discuss some ideas on applying computational social choice research to emerging application areas such as Liquid Democracy and Participatory Budgeting.

László Csató: Two Applications of Axiomatic Ranking

In several decision-making problems, information is available on paired comparisons between the alternatives. We address two potential fields of applications, ranking in Swiss-system chess team tournaments and university ranking on the basis of revealed preferences. The former involves possible missing values since two teams may have played or not played against each other, while the latter also allows for multiple comparisons between the same alternatives. An axiomatic approach is followed in both cases, which leads to the emergence of some new properties as reasonable requirements. It is argued that a promising research direction is the integration of the pure axiomatic analysis and real-world problems, especially since characterizations on the entire domain pose a significant challenge.

Ágnes Cseh: Popular Matchings

We investigate a scenario where each agent in a graph has an ordered preference list over some other agents, and our task is to find a matching of the agents that is optimal with respect to a certain notion of global social welfare. A matching M is popular if there is no matching M' such that the vertices that prefer M' to M outnumber those that prefer M to M'. In this talk, we will summarize known results about the existence and structure of popular matchings under various assumptions, such as bipartite or general graphs, strictly ordered lists or lists containing ties, and edge costs. We will also pose several open questions.
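
The popularity comparison between two matchings can be sketched as a head-to-head vote, where each vertex votes for the matching that gives it the more preferred partner (the instance, preference lists, and function names below are illustrative only):

```python
def prefers(pref_list, a, b):
    """True if this vertex ranks partner a strictly above partner b.
    Being matched is always preferred to being unmatched (None)."""
    if a is None:
        return False
    if b is None:
        return True
    return pref_list.index(a) < pref_list.index(b)

def more_popular(prefs, M1, M2):
    """True if strictly more vertices prefer their partner in M1 to
    their partner in M2 than the other way around."""
    votes_m1 = sum(prefers(prefs[v], M1.get(v), M2.get(v)) for v in prefs)
    votes_m2 = sum(prefers(prefs[v], M2.get(v), M1.get(v)) for v in prefs)
    return votes_m1 > votes_m2

# A toy bipartite instance: a, b on one side, x, y on the other.
prefs = {'a': ['x', 'y'], 'b': ['x', 'y'],
         'x': ['a', 'b'], 'y': ['b', 'a']}
M1 = {'a': 'x', 'x': 'a', 'b': 'y', 'y': 'b'}
M2 = {'a': 'y', 'y': 'a', 'b': 'x', 'x': 'b'}
assert more_popular(prefs, M1, M2)      # M1 wins the vote 3 to 1
assert not more_popular(prefs, M2, M1)
```

A matching is popular exactly when no rival matching wins such a vote against it.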

Andreas Darmann: Ordinal Group Activity Selection

We consider the situation in which group activities need to be organized for a set of agents. Each agent can take part in at most one activity, and the agents' preferences depend both on the activity and the number of participants in that activity. In particular, the preferences are given by means of strict orders over such pairs "(activity, group size)", including the possibility "do nothing". Our goal will be to assign agents to activities on the basis of their preferences, the minimum requirement being that no agent prefers doing nothing, i.e., not taking part in any activity at all. We aim to establish such an assignment by two approaches. On the one hand, taking a voting-theoretical perspective, we apply Borda scores and the Condorcet criterion to our setting. On the other hand, we aim for a Pareto optimal assignment. We analyse the computational complexity involved in finding such desired assignments, with a focus on two natural special cases of the agents' preferences.
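
To make the voting-theoretical reading concrete, here is a minimal sketch (with hypothetical names and a toy instance, not the talk's algorithm) of scoring an assignment by the Borda points each agent gives to its assigned (activity, group size) pair:

```python
VOID = ('do nothing', 0)  # sentinel pair for not participating

def borda_score(ranking, pair):
    """Borda points: |ranking| - 1 for the top-ranked pair, 0 for the last."""
    return len(ranking) - 1 - ranking.index(pair)

def assignment_borda(rankings, assignment):
    """Total Borda score an assignment collects from all agents."""
    return sum(borda_score(rankings[i], assignment[i]) for i in rankings)

# Two agents ranking (activity, group size) pairs for one activity.
rankings = {
    1: [('hiking', 2), ('hiking', 1), VOID],
    2: [('hiking', 2), VOID, ('hiking', 1)],
}
# Assigning both agents to hiking yields group size 2, each agent's
# top pair, for a total Borda score of 4.
assignment = {1: ('hiking', 2), 2: ('hiking', 2)}
assert assignment_borda(rankings, assignment) == 4
```

A valid assignment must of course be consistent: the group size in each agent's assigned pair has to equal the number of agents actually assigned to that activity.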

Ronald de Haan: Going Beyond the Traditional Complexity Analysis

An important topic in computational social choice is to study the computational complexity of different problems related to social choice procedures. For some of these problems (e.g., computing the winner of an election) one would like positive complexity results, and for other problems (e.g., strategic manipulation or bribery) negative complexity results are desired. The motivation behind wanting negative results for the latter type of problem is that computational intractability could act as a barrier against various kinds of unwanted behavior. For example, if strategic manipulation is computationally intractable, an agent would be deterred from reporting a false preference or opinion.

Traditionally, the inherent difficulty of problems has been studied using the classical theory of complexity. Such classical complexity results can be overly negative, as they are based on the worst-case instance possible. It could be the case (and is often likely) that such worst-case instances do not occur in the application domain that one is interested in. In response to this downside of classical complexity, much research in computational social choice has employed the theory of parameterized complexity. In this framework, one can dismiss instances that lack a particular type of structure, leading to more refined computational complexity results.

In this talk, we will argue that conventional parameterized complexity investigations can also yield results that are overly negative, and that could lead to a false sense of safety regarding computational barriers against undesirable behavior in social choice settings. We discuss several possible situations where naively taking (parameterized) complexity results at face value could give an inaccurate image of the extent to which computational intractability hinders undesirable behavior. We consider various ways in which one could improve the complexity analysis of problems to avoid such illusory conclusions, and we describe some recent results for one of these more sensitive types of complexity analysis.

Zsuzsanna Jankó: Various Stable Matching Concepts

We consider stable matching problems in two-sided markets. We describe the preferences of each agent using choice functions, thus we can describe one-to-many and many-to-many allocations, too. A famous variant is the Hungarian college admission model, where students have a strict preference order over colleges, while colleges assign integer scores to their applicants, and if multiple students have equal scores the college has to accept all or none. Each college has a quota it cannot exceed. Due to this all-or-none rule, the choice functions of colleges do not satisfy the Irrelevance of Rejected Contracts property, which is implicitly assumed in many papers that define preferences based on an ordering over all acceptable contract sets. We define multiple stability concepts that can deal with the lack of IRC, compare them to each other, and explore the lattice property using Tarski's fixed point theorem. This talk is based on the speaker's thesis and joint papers with Tamás Fleiner.

Svetlana Obraztsova: Voting Games: Trembling Hand Equilibria

Traditionally, computational social choice has focused on evaluating voting rules by their resistance to strategic behaviour, using computational complexity as a barrier against it. In contrast, recent works (dating from 2010) take another natural approach and analyse voting scenarios from a game-theoretic perspective, viewing strategic parties as players and examining the possible stable outcomes of their interaction (i.e., equilibria). The main problem with this approach is the multitude of unrealistic Nash equilibria. Fortunately, several refinements have been developed that make it possible to filter out some undesirable Nash equilibria. In this talk, I will describe the most recent of these refinements -- Trembling Hand Equilibria -- and its application to two voting models, based on the Approval and Plurality voting rules.

Jan Christoph Schlegel: Some Properties of Ex-ante Stable Lotteries

We study the allocation of indivisible objects (e.g., school seats) to agents by lotteries. Agents have preferences over different objects and have different priorities at different objects. The priorities can contain indifferences, i.e., some agents may have the same priority at some object. For this model the fairness condition of ex-ante stability has been suggested. A lottery is ex-ante stable if there does not exist an agent-object pair such that we can increase the probability of matching this pair at the expense of agents who have lower priority at the object and of objects that are less preferred by the agent.

As a first result, we show that this fairness condition is very demanding: ex-ante stable lotteries have small support, meaning that only a few agent-object pairs have a positive probability of being matched. The number of pairs in the support depends on how many indifferences in the priorities the lottery exploits. In the extreme case where no object is matched with positive probability to two equal-priority agents, the lottery is almost degenerate. Otherwise, the size of the support is completely determined by the size of the lowest priority classes from which agents are matched to the respective objects. We interpret our result as an impossibility result: with ex-ante stability one cannot go much beyond randomly breaking ties and implementing a (deterministically) stable matching with respect to the broken ties.

As a second result, we derive a new characterization of the set of lotteries that can be decentralized by a pseudo-market with priority-specific pricing as introduced by He et al. (2015). These allocations coincide with the ex-ante stable lotteries that do not admit a strong stable improvement cycle.

Both results are derived by combinatorial arguments using a graph representation of the lotteries.

Piotr Skowron: Multiwinner Election Rules: Axioms and Applications

We consider multiwinner election rules and discuss some of their properties. In particular, we discuss the concept of proportional representation of multiwinner election rules. Informally, proportional representation requires that the extent to which a particular preference or opinion is represented in the outcome of elections should be proportional to the frequency with which this preference or opinion occurs within the population. We extend the principle of proportional representation to rankings, as a formal measure of diversity of a collective ranking. There are numerous applications where such diversity within collective rankings is desirable: from displaying diversified search results to the users of a search engine, through accommodating different types of users in rankings provided by recommendation systems, to supporting decision-making processes under liquid democracy or even finding committees of alternatives with additional underlying structure.

Marija Slavkovik: Extending Judgment Aggregation

Judgment aggregation problems are a class of collective decision-making problems represented in an abstract way, subsuming some well-known collective decision-making problems such as voting. The aggregation of both subjective and value judgments can be modelled as judgment aggregation problems. In the standard judgment aggregation framework, aggregation is operationalised by a function applied to Boolean judgments. We motivate, discuss, and propose two different extensions of the standard framework. The first extension, aimed at aggregation of subjective judgments, is to use iterative aggregation instead of an aggregation function. The second extension, aimed at aggregation of value judgments, extends the framework to allow for probabilistic instead of Boolean judgments to be aggregated.

Balázs Sziklai: Group Identification in Online Communities

Online communities play an ever-increasing role in our life. From the market's point of view these communities are a huge opportunity, but also a challenge for which we need to design clever mechanisms. So far engineers and computer scientists have done the lion's share of the work; economists have yet to enter the scene. In this talk I will focus on the group identification problem, more specifically on how to identify experts in a social network. Expert selection methods can be used (among others) to identify professionals in technical support communities or to generate content recommendations in social media. I will provide an algorithm which is axiomatically well-founded. The key idea is that experts are more effective in identifying each other, thus the selected individuals should support each other's membership. To demonstrate the effectiveness of the algorithm I present a case study based on citation data.
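
One simple scheme in the spirit of "selected individuals should support each other's membership" (a k-core style sketch with hypothetical names, not the speaker's axiomatically founded algorithm) is to iteratively discard anyone endorsed by fewer than k members of the current group:

```python
def mutual_support_group(endorsements, k):
    """Shrink the candidate set until every remaining member is
    endorsed by at least k other remaining members."""
    group = set(endorsements)
    changed = True
    while changed:
        changed = False
        for v in list(group):
            # Count endorsements of v coming from inside the current group.
            support = sum(1 for u in group
                          if u != v and v in endorsements.get(u, set()))
            if support < k:
                group.discard(v)
                changed = True
    return group

# a, b, c endorse each other; d endorses a but is endorsed by nobody.
endorsements = {'a': {'b', 'c'}, 'b': {'a', 'c'},
                'c': {'a', 'b'}, 'd': {'a'}}
assert mutual_support_group(endorsements, 2) == {'a', 'b', 'c'}
```

The surviving group is self-supporting: removing the outsider d does not destabilise the mutual endorsements among a, b, and c.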

Page last modified: 09 October, 2016