Moderation of the Community

Good afternoon,

I thought I would update you all on some of the work we have been doing this week following the recent spam issues on the website.

As you know, due to a number of resource constraints here, we are not able to actively monitor the community on a daily basis, and we rely on the regular visitors of the community, like yourselves, to alert us to any issues.

We know how important the community is to many of you, and we love that you feel so passionately about it and want to maintain its standards. To address the moderation issues, we are currently speaking to all colleagues who act as moderators about their involvement in the community, and we are also exploring ways to expand our moderation resources so that content can be moderated more efficiently. Some of these solutions include "out of hours" and weekend support, as we know the community is busy all day, every day.

While we work on these solutions, I thought I would reiterate the best way to report spam or abusive content on the community and "flag" the concern to us. If you are unsure, you can do this in one of the following ways:

1. In the forum thread, next to any post or reply there is a MORE option. Select this, then select the "Flag as spam/abuse" option (see example)

or

2. In a blog or news item select the "Flag as spam/abuse" option in the right-hand column (see example)

Currently, spam or abusive content is not automatically removed if only one person reports it; this is to ensure fairness within the community. However, if more than two contributors report it, automatic rules will, in some cases, remove the content immediately.

These rules are based on a number of settings, which we are currently reviewing, but in basic terms they depend on the credibility and authority each user has within the community.

This credibility and authority is called a Value Score and is made up of many different factors, including the number of posts, likes received, time in the community, and so on. If the combined Value Score of the people reporting the abuse is higher than that of the person who posted the allegedly abusive or spam content, the post will be removed automatically. If the combined score of the reporters is lower, the content will remain until it has been reviewed by a moderator, following our standard moderation escalation process.
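For anyone curious, the decision rule described above can be sketched roughly in code. Please treat this purely as an illustration: the function names, the weightings, and the exact make-up of the Value Score here are my own assumptions for the example, not the platform's actual implementation.

```python
# Illustrative sketch only - weights and names are assumptions, not the
# community platform's real Value Score formula.

def value_score(posts: int, likes: int, days_in_community: int) -> float:
    """Combine activity signals into a single score (weights are made up)."""
    return posts * 1.0 + likes * 0.5 + days_in_community * 0.1

def should_auto_remove(author_score: float, reporter_scores: list) -> bool:
    """Remove automatically only when more than two people have reported
    the content AND the reporters' combined score exceeds the author's.
    Otherwise the post stays up pending moderator review."""
    if len(reporter_scores) <= 2:
        return False
    return sum(reporter_scores) > author_score

# Example: three established members report a post by a brand-new account.
author = value_score(posts=1, likes=0, days_in_community=0)
reporters = [
    value_score(posts=10, likes=20, days_in_community=100),
    value_score(posts=5, likes=4, days_in_community=30),
    value_score(posts=2, likes=1, days_in_community=7),
]
print(should_auto_remove(author, reporters))  # True: removed immediately
```

Note that the "more than two reporters" check runs first, so even very high-scoring members cannot trigger automatic removal on their own - that report simply goes to the moderator queue instead.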

To help prevent spam and abusive content from appearing in the first place, we already have the following measures in place:

1. Registration security (reCAPTCHA) - this prevents automated services from registering and posting. Unfortunately, the recent activity is not automated; it is individuals creating accounts and posting spam content. We could put further measures in place to prevent new registrations from posting immediately, but we feel this would discourage legitimate new users from becoming active.

2. Content filters - we have filters in place that prevent abusive language or content from being posted, and we will continue to monitor these filters to ensure they are as robust as they can be.

We are also reviewing the spam filters, which can recognise certain spam-like behaviour and prevent the post from being published in the first place.

We hope this update gives you some confidence that, despite our limited resources, we are doing all we can to ensure moderation happens as quickly as possible.

Many thanks for your support and understanding,

Elliott