Student Voices: Need for More Transparency in Japanese Content Filtering

Anyone who has used the Internet knows that sometimes we cannot access a page because a company has blocked it as containing “illicit” content. But how is this done, and who decides what counts as dangerous content and what does not?

Of course, some sites need to be blocked: sites that give children gruesome instructions on how to commit suicide, or that host child pornography. But if these sites are blocked, who knows what else is being blocked alongside them? In Japan, it is telecommunications companies, not the government, that pick and choose which sites to block. Before blocking a site, they review its content, and they also take requests from teachers or parents complaining about inappropriate material available on the web.

There are other ways of blocking content on the Internet. France bans Nazi literature from the Internet; this is the work of the state. Are all states equal, then? Should China have control over Falun Gong, a group the Chinese government has called a “superstitious, foreign-driven, tightly organized, dangerous group of meditators” and a “menace” to society? Other approaches include banning websites that violate ICANN’s standards of “morality and public order,” and substituting computers for human review: automatically checking for inappropriate content through contextual word analysis, flesh-tone analysis, or the maintenance of a database of categorized websites. The most prevalent and accurate form of content control is the last of these: a maintained database of categorized websites.
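To make the two main automated approaches concrete, here is a minimal sketch of how a filter might combine a database of categorized sites with contextual word analysis. Every host name, category, and keyword below is an invented placeholder, not drawn from any real filtering product.

```python
# Hypothetical blocked categories and a pre-categorized site database,
# as a filter vendor might maintain. All entries are invented examples.
BLOCKED_CATEGORIES = {"adult", "self-harm"}
SITE_CATEGORIES = {
    "example-adult-site.test": "adult",
    "example-news-site.test": "news",
}

# Hypothetical keyword list for crude "contextual word analysis".
BLOCKED_KEYWORDS = {"forbidden-term", "another-forbidden-term"}

def is_blocked(host: str, page_text: str) -> bool:
    """Block if the host's category is blocked, or if any blocked
    keyword appears in the page text."""
    # First pass: look the host up in the category database.
    if SITE_CATEGORIES.get(host) in BLOCKED_CATEGORIES:
        return True
    # Second pass: simple word-level scan of the page text.
    words = set(page_text.lower().split())
    return bool(words & BLOCKED_KEYWORDS)

print(is_blocked("example-adult-site.test", "hello"))        # category match
print(is_blocked("example-news-site.test", "forbidden-term")) # keyword match
print(is_blocked("example-news-site.test", "the weather"))    # allowed
```

Note how much judgment is baked into the two dictionaries at the top: whoever writes those lists decides what the user sees, which is exactly why their contents matter.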

Since 2009, Japan has filtered all mobile phone traffic for minors, and the Content Evaluation and Monitoring Association (EMA) was organized to create and maintain a set of “black” and “white” lists. The question is: how do governments or third parties like the EMA choose what the public can see and what it cannot? How do we know that content is inappropriate for minors when there is no public review process for the lists? You may argue that the genius of the Internet is that we do not need permission to reach out to the world, and that it is set up for everyone to use freely. But if some content threatens our values and society, should there be a role for regulation? This brings us back to the question of who should be responsible, and what regulation they should be charged with.
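In principle, a black-list/white-list scheme like the one the EMA maintains reduces to a simple lookup with a default policy for unlisted sites. This is a sketch under that assumption; the list entries are invented placeholders, and the real EMA lists and rules are exactly the things we cannot inspect.

```python
# Hypothetical list contents; the real lists are not public.
WHITE_LIST = {"approved-community-site.test"}  # explicitly allowed for minors
BLACK_LIST = {"flagged-site.test"}             # explicitly blocked

def allowed_for_minor(host: str, default_allow: bool = False) -> bool:
    """White list wins, then black list; unlisted hosts fall back to a
    default. A restrictive operator would set default_allow=False
    (white-list mode); a permissive one, True (black-list mode)."""
    if host in WHITE_LIST:
        return True
    if host in BLACK_LIST:
        return False
    return default_allow
```

The entire policy lives in two sets and one default flag, yet none of the three is disclosed to the people being filtered, which is the transparency problem this article is about.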

Although I am quite aware of the dangers of inappropriate content on the Internet, and I do not want to see people get hurt because of what they see or read there, I think telecommunications companies and third parties should be clear about what they are blocking. If the content they are filtering consists solely of inappropriate sites that minors should not see, then there should be no problem bringing this to light. Yet mobile operators and telecommunications companies remain opaque both about how filtering works and about what counts as “blockable” content.

The Internet is popular because it is a free platform where non-professionals can reach out to the world and share their ideas and opinions. For the sake of safety, and of transparency between users and their governments, users should also be told what is not allowed and what is being blocked.