[Discussion] Karrot trust system and user levels

Would be cool to add something that makes sure people who deserve trust carrots also get them.

If we required trust to keep editing permissions, some people might get stripped of their rights even though they still need them. We could implement a warning beforehand though…
It might be even worse if trust expires or if trust requirements change dynamically (e.g. needed trust is relative to group size and the group grows)

I think problems with people bringing chaos into groups (or group settings) can be resolved once we release the user removal (“conflict resolution”) feature - expect another post today or tomorrow!

Some time has gone by - I’d be interested in how you find it now. Do you think enough people receive trust? Or too many?

In my opinion too many. It’s currently super easy to get editing rights. The current settings give editing rights to a user who is trusted by three other users. Is that restrictive enough? In a small group of 20 people, maybe it is. But in bigger groups this is hardly any limitation. In our group in Warsaw we have about 80 users. A lot of them joined because they were recommended by other members, so they already have someone who will trust them. It’s then only a matter of finding two more users who will trust them to get editing rights. As a result, users who have just joined the group may be given editing rights and can also take part in conflict resolutions about users they may not even know.

My proposal is to restrict editing rights more:

  • The number of people who need to trust a user before he/she gets editing rights should be higher. Five is the minimum in my opinion, and the number should be agreed on by each group separately. This ensures that new users who get editing rights are vetted by a bigger number of other users.
  • To get editing rights, a user should be trusted by at least X (e.g. one) users with editing rights. This ensures new users are not just members of small cliques who trust each other, but are also trusted by someone who is already trusted.
  • New users shouldn’t get editing rights immediately after collecting enough trust. There should be a waiting period, e.g. at least a month after signing up, which has to pass before a new user can get editing rights. This ensures new users have enough time to get to know the group and become able to make reliable decisions.
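Taken together, the three proposed conditions could be sketched as a simple eligibility check. This is purely illustrative - the function, its names, and the concrete numbers are my invention, not Karrot’s actual code:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the three proposed rules; all names and
# numbers are illustrative, not Karrot's real implementation.
TRUST_THRESHOLD = 5                   # proposal 1: group-configurable, minimum 5
MIN_EDITOR_TRUSTERS = 1               # proposal 2: at least X trusters must already be editors
WAITING_PERIOD = timedelta(days=30)   # proposal 3: e.g. one month after signing up

def may_become_editor(trusted_by, editors, joined_at, now=None):
    """trusted_by: users who gave trust; editors: users who already have editing rights."""
    now = now or datetime.utcnow()
    enough_trust = len(trusted_by) >= TRUST_THRESHOLD
    trusted_by_editor = len(set(trusted_by) & set(editors)) >= MIN_EDITOR_TRUSTERS
    waited_long_enough = now - joined_at >= WAITING_PERIOD
    return enough_trust and trusted_by_editor and waited_long_enough
```

All three conditions have to hold at once, so tightening any one of them (a higher threshold, more editor-trusters, a longer waiting period) only ever makes the check stricter.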

Thanks for your thoughts on this, @mzpawlowski!
Did you have bad experiences with users abusing their editing rights or do you only assume that it might happen? I’d like to point out that we never intended the editing rights to turn into admin roles which only a few people have. The danger a group faces when needing approval of someone who already has editing rights is that elites form and the once open group actually closes down. I think it would be a shame if that happened.

I do agree that the number of trusts needed could adapt more to the size of the group. I’m not completely sure how it works right now, but I think it already adapts a little. Can you maybe shed light on this @tiltec? (I might write a manual page for this feature as well, if I get all the info… :wink:)

When it comes to the waiting time, I also think that this could make sense, but am unsure how long it should be. Are there more opinions on this?

This post contains some information:

This is more for bootstrapping, as soon as the group has more than 6 active members the threshold is always 3.

Before we change it, we should look into the statistics. Also, as @djahnie mentioned, it also depends on the goal we set.
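If I read the bootstrapping description above correctly, the threshold logic has roughly this shape. The ramp for the smallest groups is a guess on my part - only the “more than 6 active members means the threshold is always 3” part comes from the post:

```python
def trust_threshold(active_members, cap=3):
    # Sketch based on the description above: small groups need fewer
    # trusts so the feature can bootstrap; once a group has more than
    # 6 active members, the threshold is always the cap of 3.
    # The exact ramp below the cap is an assumption, not Karrot's code.
    if active_members > 6:
        return cap
    return max(1, active_members // 2)
```

A group-size-relative threshold, as proposed above, would replace the constant cap with something that keeps growing with `active_members`.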


No bad experience so far with editing rights being abused. I’ve been just pointing out the possibilities to consider :slight_smile:

As much as I understand your idea of having a group where everyone is equal, with no admin roles or so-called ‘elites’, I don’t think it is very safe. And there are probably not many examples of systems which work this way. When a group gets bigger and bigger, the small probability that a user has bad intentions can eventually materialize. Editing rights combined with bad intentions could cause serious problems.

Imagine the following situation:
Michal, who has editing rights in Karrot, tends to abuse some rules, and someone has finally decided to open a conflict resolution case against him. Voting takes 7 days, during which Michal signs up to Karrot with 3 new e-mail addresses and accepts his own applications from the old account which has editing rights. In total Michal now has 4 accounts. Each of them can be trusted by 3 other accounts which belong to him. As a result, all 4 accounts will get editing rights. Even if everyone in the voting decides to get rid of Michal (his first account), he will still have 3 more accounts with editing rights. New cases can be opened against them, but as long as Michal is not restricted by anything or anyone, he can continue this procedure forever.

This may sound unrealistic, but it’s theoretically possible. I believe we have been able to avoid situations like this in Warsaw for two reasons:

  • We have a recruitment process in which we can evaluate a candidate before he/she joins the group on Karrot.
  • Not many Karrot users are aware of the rights they have.

But there is no guarantee it won’t happen. That’s why I think we need more user levels and/or user roles. For example, if there were a special user role that accepts new applications, the situation with Michal couldn’t happen (as long as Michal didn’t have the rights to accept new applications).

I’m aware this might go against your idea behind Karrot’s development, but I just want to highlight that it also poses serious risks.

A bit of dreaming ahead: now that we have the “issues” system in place for conflict resolution, we could also implement it for other areas, for example:

  • should we change the group name?
  • should User Y be the coordinator for Store A? (then nobody else can change Store A’s settings)
  • should User Z be responsible for accepting newcomers to the group? (then nobody else can accept applications or invite users)

That’s essentially the idea of “direct democracy”, but also “task delegation”, which I think are both very important for bigger groups.


I am totally aware of the risk you pointed out. :slightly_smiling_face:
To me the more important point is the following: if somebody really wants to fuck with the group, they can do so anyway. These are problems that need to be solved by humans, not by software. Humans can do so much better, because they can evaluate case by case and don’t need one system that applies to every single case and does all of them justice. You seem to know that as well - that’s why you have a recruitment process outside of Karrot, and that is exactly what we hope all groups do.

When I read your example, I think the thing to change would rather be that currently only one approval is needed for an application to be accepted. That’s another thing that was just a simple solution for the first iteration of a complex feature. In the future there could be a team of trusted and interested users who take care of applications, so that only those can accept or decline; or we could have adaptive numbers similar to the trust feature - although that would need some thinking, because negative and positive voices would need to be balanced against each other somehow…

Anyway, there are many things we can change, but I seriously doubt that our approach of “Don’t let potential malicious use cases guide your design decisions!” will be the first one… :wink:

I agree 100%. We solve our problems outside of Karrot. But so far we haven’t had a real possibility to remove someone from the software, although we once needed to remove someone from the community. This person still has a Karrot account and could come back and mess with the system if he knew it was possible. I’m trying to avoid a situation in which someone wants to misuse his/her powers in Karrot. That’s why I’m raising all these theoretical cases :slight_smile:

Don’t be offended, but to me the above sentences contradict each other. A team of trusted users who take care of applications is no different from an ‘elite’ who approves editing rights. Or don’t I fully get your point here? Anyway, this is exactly how it works in our community in Warsaw. We have a group of a few people who take care of the application and recruitment process, and only this group, not the whole community, accepts or declines newcomers. This happens in real life, though, because in the software everyone with editing rights can accept applications. My point is that there are always groups within communities who have special rights, but these also come with greater responsibility. Such groups are not ‘elites’, though, if anyone can join them.


No, you’re right, I didn’t explore the thought well enough for it to make sense. In this scenario the team of application managers (or whatever we wanna call them) would need to be elected by the group using the exact voting mechanism we introduced for conflict resolution. As @tiltec already outlined above, the voting could be used for many cases in which legitimization by the group becomes necessary. It’s not fully thought through yet, but the general idea is to combine the best of both worlds: the clear distribution of tasks, which keeps users from being overwhelmed by possible responsibilities (and which groups normally have in place already anyway), and the dynamically adaptive and permeable structure of an open group, which gives everybody the opportunity to get further involved if they show commitment and interest in a particular aspect.

This is another of the core ideas we try to follow with Karrot: not to push a certain artificial structure onto groups, but to represent the structure the groups already use in the physical world and to match its real rights and responsibilities in the digital sphere. That’s quite hard to do, as it requires a lot of dynamic adaptability from the software as well as constant legitimization by the group - and at the same time we don’t want to annoy the users with too many questions and options… So as you can see it’s a hard thing to balance, but we’ll continue to do our best! :slightly_smiling_face:

Well, that should be changed by now, so I hope the immediate danger is averted… :wink:


After some short digressions, maybe we can come back to the discussion of my proposals :slight_smile:

I added your proposals here: Follow-up tasks: trust system and user levels · Issue #1095 · yunity/karrot-frontend · GitHub

Both seem reasonable to me. The first seems easy to change, but I would additionally change the logic for how the trust threshold “grows” with group size. Otherwise smaller groups would have too high requirements.
The second could be a group setting, although I’m a bit hesitant about adding more settings, as there’s a complexity explosion with every customization option. And currently the code doesn’t really deal with changing thresholds, so it might lead to unexpected behavior.

That seems reasonable to me too, although it would need some exception for groups that are just getting started. Otherwise only the group founder would have editing permissions for the first weeks. I would add this only if there’s a strong reason for it.

I had a quick look at the statistics, in the last three months in Foodsharing Warszawa:

  • 159 trusts were given
  • 10 editing permissions were granted
  • 85 active members, of which 18 are newcomers (22 %)

The other big groups on karrot.world are Foodsharing i Östersund with 91 and Solikyl with 72 active members. I noticed they have a lot more newcomers (63 % and 57 %). They also have less activity (pickups+feedback+messages), so users spend less time on Karrot. I think there might be a connection.

Another interesting statistic would be “time from joining the group until gaining editing permission”, to have some guidance when adding a time threshold.

I’d really like to run a graph analysis, to identify how connected users are via trust. This should show if there are many separate “bubbles” in a group.
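For what it’s worth, those “bubbles” are just the connected components of the (undirected) trust graph, which can be found with a plain breadth-first pass using only the standard library. The data format here is made up - real edges would have to come from Karrot’s backend:

```python
from collections import defaultdict, deque

def trust_bubbles(trust_edges):
    """Group users into connected components ("bubbles") of the trust graph.

    trust_edges: iterable of (giver, receiver) pairs, treated as undirected.
    Returns a list of sets of users, one per bubble. Illustrative sketch only.
    """
    # Build an undirected adjacency map from the trust edges.
    neighbours = defaultdict(set)
    for giver, receiver in trust_edges:
        neighbours[giver].add(receiver)
        neighbours[receiver].add(giver)

    seen, bubbles = set(), []
    for user in neighbours:
        if user in seen:
            continue
        # BFS from this user collects everyone reachable via trust.
        queue, component = deque([user]), set()
        while queue:
            u = queue.popleft()
            if u in seen:
                continue
            seen.add(u)
            component.add(u)
            queue.extend(neighbours[u] - seen)
        bubbles.append(component)
    return bubbles
```

A group whose members all trust across sub-communities would come out as one big component; several small components would be the separate cliques mentioned in the proposals above.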


I have scanned this topic and it’s clear to me that currently there is no option to withdraw trust from a user. Is this correct? I ask because someone in the Warsaw group told me that I removed a trust I gave her earlier. I can’t remember removing a trust, especially since I didn’t know it was possible at all, but it really seems that this person had a trust from me and later it wasn’t there. Supposedly there have been a few other incidents like this. Could it have happened at some point that trusts got removed? Is it possible to get some data from the back-end to confirm or reject the hypothesis that trusts are being removed?

Indeed, I already received one report that trust didn’t “stick”. I investigated a bit, but couldn’t find the problem.
You might be onto something there. Can you send me more details via private message? User names, ids and time frames are most useful!


I found the bug and pushed the fix. It would happen when a user left a group: their trust in other groups would also get deleted.
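For anyone curious, a bug like that presumably boils down to the trust deletion not being scoped to the group being left. A plain-Python sketch of the difference (Karrot itself uses the Django ORM; the record shape and function names here are my invention):

```python
# A trust record is modelled here as a tuple: (giver, receiver, group).
# Both functions return the trust records that survive a user leaving a group.

def leave_group_buggy(trust_records, user, group):
    # Bug: deletes ALL trust the user gave or received, in every group.
    return [(g, r, grp) for g, r, grp in trust_records
            if user not in (g, r)]

def leave_group_fixed(trust_records, user, group):
    # Fix: only delete trust records belonging to the group being left.
    return [(g, r, grp) for g, r, grp in trust_records
            if grp != group or user not in (g, r)]
```

With the fixed version, trust the user holds in their other groups stays untouched when they leave one group.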

1 Like