Who is the Spam King?

By: kent

25 May 2011

Canada is widely recognized as a hotbed for spammers. In 2008, we ranked 5th in the world for total volume of e-mail spam, finishing behind only Iran, Nigeria, Kenya and Israel. Last year, Montreal-based "Spam King" Adam Guerbuez was ordered to pay a record $1 billion fine by a U.S. court.

Hopefully, the situation will vastly improve in the near future. Last December, the government finally passed a much-overdue anti-spam bill, An Act to promote the efficiency and adaptability of the Canadian economy by regulating certain activities that discourage reliance on electronic means of carrying out commercial activities, and to amend the Canadian Radio-television and Telecommunications Commission Act, the Competition Act, the Personal Information Protection and Electronic Documents Act and the Telecommunications Act.

"The what?!" you may ask. As an interesting aside, this act has no official short title. Every other Act that refers to it in the future will have to use this lengthy, unwieldy name. The reasons have everything to do with Christmas.

This bill was first known as the Electronic Commerce Protection Act, but it died on the order paper when parliament prorogued for the holidays in December 2009. When reintroduced last year, the government changed the title to the sensational U.S.-style name of "Fighting Internet and Wireless Spam Act". Reportedly unimpressed by the change, the Industry Committee members approved the entire bill except for the short title. Then, rather than follow a lengthy process to introduce a new short title--and with Christmas once again looming--the committee, House, and Senate all passed the bill as it was.

Now, back to the topic of spam. The law is finally updated, but I remain skeptical that market practices in Canada have yet changed. So I'm going to find out.

To conduct my investigation, I'm using a little-known feature available on most e-mail servers: the catch-all address. When a domain is configured with a catch-all, every e-mail sent to any address at that domain lands in a single inbox. Whether the address is bob@domain.com or almagwatchi_3141592554@domain.com, the e-mails all end up in the same box.
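For the technically inclined, here's roughly what this looks like on a mail server. The snippet below is a sketch of a catch-all configuration in Postfix, with example.com standing in for the real domain:

```
# main.cf -- declare the domain as a virtual alias domain
virtual_alias_domains = example.com
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual -- any local part at example.com goes to one inbox
@example.com    catchall@example.com
```

After editing the map, `postmap /etc/postfix/virtual` rebuilds the lookup table and the catch-all takes effect.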

By creating accounts on various internet services and using a unique e-mail address for each, I can track the ways that each address is used, sold, shared, or inadvertently exposed. Because each address was given to exactly one service, the "to" address on any message reveals which website or service is ultimately responsible for it, while the "from" header shows who actually sent it.

I'll simply tally up the number of e-mails that arrive, keeping track of both "proper" e-mails that originate from the site's own domain and spam e-mails sent by other internet sites. For each site, I'm signing up two accounts: one that refuses all e-mails and messages and another that consents to all of them.
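The bookkeeping itself is trivial. Here's a minimal Python sketch of the tallying idea, with invented addresses and senders standing in for the real accounts:

```python
# Minimal sketch of the tallying logic. The signup addresses and inbox
# contents below are invented for illustration.
from collections import Counter

# Each signup used a unique local part, so the "to" address identifies
# the site that the message traces back to.
signups = {
    "facebook-x7q2@example.com": "facebook.com",
    "ebay-p9k1@example.com": "ebay.com",
}

def classify(to_addr, from_addr):
    """Return (site, kind): 'proper' if the sender matches the signup site."""
    site = signups[to_addr]
    sender_domain = from_addr.rsplit("@", 1)[1]
    kind = "proper" if sender_domain.endswith(site) else "spam"
    return site, kind

tally = Counter()
inbox = [
    ("facebook-x7q2@example.com", "notify@facebook.com"),
    ("facebook-x7q2@example.com", "deals@cheap-pills.example"),
    ("ebay-p9k1@example.com", "no-reply@ebay.com"),
]
for to_addr, from_addr in inbox:
    tally[classify(to_addr, from_addr)] += 1

print(tally)
```

Run against a real catch-all mailbox, the same loop would just iterate over parsed message headers instead of a hard-coded list.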

I'm starting with the top websites in Canada, as listed by Alexa, but let me know if there's a suspect site that you would like me to add to the test!

I recently posted about a privacy project proposal I submitted to the "Ideas for a Better Internet" project by Harvard and Stanford Law and Computer Science students. This great initiative is now seeking further public input and comments on how the group should go about re-engineering the internet:

  • [W]e're announcing our Call 2.0, where we're asking the world for feedback to help develop the ideas for a better Internet. We've selected and consolidated the ideas that most resounded with our team, gathered and divided them into topic groups, and posted them on our new website: http://www.i4bi.org.

There are a lot of great ideas here, but here's a quick rundown of a few of my favourites:

1. Mesh networks. Essentially, a mesh network is an internet without the top-down architecture that currently allows governments--such as the recent regimes in Egypt and Tunisia--to hit the internet kill-switch. An organization called the Open Technology Initiative proposes "an open source 'device-as-infrastructure' distributed communications platform that integrates low-cost and preexisting, off-the-shelf devices, such as users' existing cell phones, WiFi-enabled computers...".

A P2P internet architecture that uses existing devices is absolutely key at this stage in trying to get a viable mesh network off the ground. The most likely early adopters are people living under oppressive regimes and people in areas unable to afford traditional internet access. In both cases, any requirement for new and different hardware would be either prohibitively expensive or simply prohibited by the government.

2. User Rating of Certificate Authorities. The internet's current system of Certificate Authorities, which we all rely upon to certify our secure HTTPS connections, is problematic in many ways. First, purchasing a certificate is expensive: prices can easily run into the hundreds, or even thousands, of dollars. This creates a significant barrier for small websites and individual webmasters looking to secure their sites. Secondly, the trustworthiness of even a several-thousand-dollar certificate is sometimes highly questionable.

Crowd-sourcing the validation and verification of a particular server's trustworthiness could make the certificate authority system more secure, and free!
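To make the idea concrete, here's a toy Python sketch of one way crowd-sourced validation could work: users vote on whether a given certificate fingerprint looks legitimate, and a weighted score decides its standing. The fingerprint, weights, and threshold are all invented for illustration:

```python
# Toy model of crowd-sourced certificate validation. Nothing here reflects
# a real protocol; the fingerprint, vote weights, and 0.8 threshold are
# made up to illustrate the aggregation step.
def trust_score(votes):
    """votes: list of (user_weight, trusted: bool) -> weighted fraction trusted."""
    total = sum(w for w, _ in votes)
    if total == 0:
        return 0.0
    return sum(w for w, ok in votes if ok) / total

observations = {
    # certificate fingerprint -> votes from users who saw this certificate
    "ab:cd:ef:12:34": [(1.0, True), (1.0, True), (0.5, False)],
}
for fingerprint, votes in observations.items():
    score = trust_score(votes)
    verdict = "trusted" if score >= 0.8 else "unverified"
    print(fingerprint, round(score, 2), verdict)
```

A real system would also have to weigh Sybil attacks (one party casting many "votes"), which is exactly where the hard design work lies.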

3. Last, but not least, I *happen* to be a fan of my own proposal that has made the initial cut: ISP level privacy protections, which I've previously discussed here.

Remember that the crews at Harvard and Stanford are actually going to start working on a couple of these innovative ideas! Comments are most welcome on the feasibility and possibilities for both my proposal and the others.

As I begin to wind down the Nomus project (see my comments here), I'm beginning to look at some smaller projects to keep my hands dirty with software code. Although I have my feet more in the legal world these days, I still absorb myself in bits and bytes during my free time. I have a few ideas...and would appreciate any comments or offers to join these efforts!

One idea I have is the development of a tool to allow users to easily manage privacy settings across multiple social networking platforms.

I think a great start would be to branch Creepy, adding functionality to allow a user to remove their sensitive location information. For readers who haven't heard of Creepy, it's an aptly named tool that fetches and aggregates location data on any person. It searches and parses geodetic metadata from any user's posts to Twitter, as well as from any photos posted to a variety of other websites such as Flickr. As the Office of the Privacy Commissioner warns, "Creepy can harvest data from a dozen of the most popular photo hosts...then illustrate any found location data with Google Maps. The result is a visual cluster of your usual whereabouts: your favourite park, your place of employment, or your home."
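The geotag arithmetic that tools like Creepy rely on is quite simple. EXIF metadata stores GPS coordinates as degree/minute/second triples plus a hemisphere reference; this little Python sketch (with made-up coordinates) shows the conversion to the decimal degrees that mapping services expect. "Removing" the data, conversely, just means deleting those EXIF tags before a photo is published.

```python
# Sketch of EXIF GPS decoding. The coordinates are invented (roughly
# downtown Montreal) and the function is my own illustration, not
# Creepy's actual code.
def to_decimal(dms, ref):
    """Convert an EXIF-style (degrees, minutes, seconds) triple plus a
    hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# 45 deg 30' 0" N, 73 deg 34' 12" W
lat = to_decimal((45, 30, 0), "N")
lon = to_decimal((73, 34, 12), "W")
print(lat, lon)  # 45.5 -73.57
```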

Another idea I've been thinking about for some time is to implement the law as computer code. I was excited to read last week on Slaw that a group from Stanford’s Center for Computers and Law is starting to work on exactly this project, called Hammurabi. They describe the aim of their effort as follows:

Though not often thought of this way, law is inherently computational. It is a set of algorithms that prescribe how various computations are to be carried out. What is my standard (tax) deduction? Am I eligible for family and medical leave? On what day did I become liable for unemployment taxes? Determinations such as these are like mathematical functions: given various inputs, they produce corresponding outputs.

The Hammurabi Project provides a vehicle for representing portions of the law in an executable format, so that the process of logical inference can be offloaded from human to machine. Once executable, it can be embedded into our computing infrastructure where it can drive other applications.

I envision a project of this sort being very useful in helping self-represented individuals identify the key issues in a legal problem. After the logic of legislation and case law is codified, a user interface could easily ask a user pertinent questions to collect the relevant facts and, after applying these facts to the law-as-code, drill down to the legal issues that arise.

It looks like the Hammurabi project is just starting out, and it'll be interesting to see what comes out of it in the future.

As I'm particularly interested in a similar implementation of Canadian law, hopefully their initiative will develop some useful tools. For now, though, after a brief look at their existing project, I would be a bit hesitant to entirely adopt their current approach for a Canadian version. Rather than using C# classes to describe the law, I think it's highly important that the code structure follow the structure of the legislation, not vice versa. With a more flexible language--perhaps Ruby--the data structures that reflect the legal text could be defined after each provision, in a document containing both the code and the legal text. This way, as legislation changes or as case law modifies and adds to existing legal rules, the rules will be easy to update. In fact, each new case could simply be another source file that modifies the existing rules-as-code.
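To illustrate what I mean, here's a rough sketch of the "each case is another source file" idea, in Python standing in for Ruby. The statute and the case below are entirely invented:

```python
# Hypothetical illustration of law-as-code with case-law overrides.
# Neither the provision nor the case exists; the point is the structure.

# --- statute.py: mirrors the structure of the provision ---
def eligible_for_leave(weeks_employed, hours_worked):
    """s. 1(1): an employee qualifies after 52 weeks and 1,250 hours."""
    return weeks_employed >= 52 and hours_worked >= 1250

# --- smith_v_jones.py: a later (fictional) case reads the hours
# requirement flexibly, wrapping and modifying the statutory rule ---
_statutory_rule = eligible_for_leave

def eligible_for_leave(weeks_employed, hours_worked, on_call_hours=0):
    """Smith v. Jones: on-call hours count toward the 1,250-hour threshold."""
    return _statutory_rule(weeks_employed, hours_worked + on_call_hours)

print(eligible_for_leave(60, 1100))                     # statute alone: no
print(eligible_for_leave(60, 1100, on_call_hours=200))  # after the case: yes
```

Because the case file only wraps the earlier definition, repealing or amending the underlying provision means editing one file, exactly the property I'd want from a Canadian version.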

If either of these ideas sounds interesting to you, or you have any comments, drop me a line at kmewhort@gmail.com!

Ideas for a Better Internet

By: kent

8 Apr 2011

I came across this call for proposals for "Ideas For a Better Internet" today. A group of Stanford and Harvard law and computer science students is soliciting submissions on ways to "make the Internet more secure, more accessible, more open, or just plain better" -- and then they're going to try to make the best ideas happen!

I'd encourage everyone who has any ideas to jot them down and send them in (it's only 350 words max). It's a great opportunity to hopefully get a couple of innovative projects off the ground.

I have one recent idea on this front -- I think many benefits could be realized by middleman ISP's taking a more active role in privacy protection. Here's what I submitted:

Recent innovations in web browsers such as Firefox and Internet Explorer are starting to address some of the growing obstacles facing privacy on the internet. Regulatory possibilities, such as a do-not-track protocol that relies on advertising agencies respecting users' privacy wishes, may also help mitigate increasing concerns. However, some of the best privacy protections might come from the middleman -- ISPs.

A few examples of privacy protections that could be implemented at the ISP-level are as follows:

  • Removal or falsification of geodetic information from any images uploaded

  • Removal of "tracking" information, such as third-party cookies by advertising agencies (perhaps through the use of a community-maintained filter list)

  • Automatic redirection to HTTPS services, where supported
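As a small illustration of the last bullet, here's a Python sketch of an HTTPS upgrade rule an ISP might apply in-line. The allowlist of HTTPS-capable hosts is invented:

```python
# Sketch of ISP-side HTTPS upgrading. The allowlist is hypothetical; a
# real deployment would need a maintained list of HTTPS-capable hosts.
from urllib.parse import urlsplit, urlunsplit

HTTPS_CAPABLE = {"mail.example.com", "bank.example.com"}

def upgrade(url):
    """Rewrite an http:// URL to https:// when the host supports it."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_CAPABLE:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url

print(upgrade("http://mail.example.com/inbox"))  # upgraded to https
print(upgrade("http://plain.example.com/"))      # left untouched
```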

The major benefit of this approach from a technological perspective is the inherent platform independence (from the perspective of an internet customer). With the proliferation of alternative web browsers such as those on mobile devices, it is not always possible for users to take advantage of browser-based privacy protections. An ISP-based solution would work for all devices.

ISP-level solutions are also advantageous from a policy perspective. Conflicts of interest often arise in the actual implementation of browser-based tools. For example, Microsoft owns a subsidiary advertising agency, which may already have resulted in watered-down privacy protections in IE.

If there is ever to be regulation of privacy, ISPs are also the ideal candidate to bring legal requirements to fruition. Whereas jurisdiction will always be an issue for browser software developers and for advertising companies, who can be located anywhere in the world, an ISP is always local to the jurisdiction of the internet user.

As far as implementation goes, ISP-level filtering would involve a web-based interface and underlying filtering/firewalling technologies (which could be based on existing client-level solutions). It may also be possible to leverage existing filtering technologies already used by ISPs for traffic shaping.

If implemented, this will be advantageous over existing privacy technologies in its platform compatibility, the lower level of conflicts of interest, and the feasibility of regulatory enforcement.

Canada's Internet Kill Switch

By: kent

17 Mar 2011

With the recent shutdowns of the internet in Egypt and Libya, as well as proposals for new U.S. legislation on the matter, there's been a lot of talk of "internet kill switches" over the past few weeks. I think it will be fruitful to discuss the legal and technical side of how Canada's government could, hypothetically, pull this big red switch on us using its existing powers.

Introduction: The U.S. Kill Switch

I became keen to look into this issue after it came to light that the U.S. already has kill-switch legislation in place under the Communications Act of 1934 (s. 706(c)) (in the case of "war or a threat of war, or a state of public peril or disaster or other national emergency, or in order to preserve the neutrality of the United States" the President "may cause the closing of any station for radio communication").

New U.S. legislation, entitled the Cybersecurity and Internet Freedom Act, may also contain kill-switch powers, and is now being hotly contested. It was during debate over this new bill that Lieberman, one of the bill's sponsors, pointed out that the Act would actually limit the existing powers of the President, not expand them. Comments by the communications director of the Senate Homeland Security Committee appear to confirm this interpretation, as she argued that the bill would "replace the sledgehammer of the 1934 Communications Act with a scalpel."

It's worth noting that although the powers in the new U.S. bill may be more circumscribed, the prescribed circumstances for its invocation may be wider. Whereas the former Act is directed at a war or national emergency, the President could invoke the new powers upon any "cyber risk...to the reliable operation of covered critical infrastructure".

Canada's Kill Switch Legislation

Now, turning to Canada's legislation, it appears that our executive government, like that in the U.S., derives kill-switch authority from an old act. First, looking to Canada's analogue to the U.S. Communications Act, it appears that the executive's powers are actually quite circumscribed under our Telecommunications Act, S.C. 1993, c. 38. The executive is only granted moderate regulatory authority, such as the ability to review CRTC decisions and to direct this body on policy matters. However, provisions analogous to the U.S. kill switch are still to be found in the Emergencies Act, R.S.C. 1985, c. 22. Several provisions permit the government to control and shut down key internet gateways.

Under a "public order emergency" in the Emergencies Act, the government may direct "the assumption of the control, and the restoration and maintenance, of public utilities and services" (s. 19(1)). In an "international emergency", it may more generally order "the appropriation, control, forfeiture, use and disposition of property or services".

Interestingly, the declaration of an "international emergency" -- where there is a threat of a conflict with another country -- comes with a s. 30(2)(b) stipulation that the powers "shall not be exercised or performed for the purpose of censoring, suppressing or controlling the publication or communication of any information". Thus, this section cannot likely be invoked as a kill switch. Per contra, no such limitation exists in the case of a declared "public order" emergency. The Governor in Council may have an internet kill switch available whenever it decides that there is "an emergency that arises from threats to the security of Canada and that is so serious as to be a national emergency" (s. 16).

Canada's Kill Switch Implementation

Now, on the technical side of the matter, there are three key ways that a government can throw an internet kill switch. The first method, widely believed to have been deployed in Egypt, is the manipulation of what is called the Border Gateway Protocol (BGP). This is essentially a partial map of the internet held by each major router, telling it where to forward and direct internet traffic passing through it. By forcing each ISP within Egypt to delete all of the BGP routes pointing into the country, the state simply dropped off the internet map and became invisible.
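For illustration, here's a toy Python simulation of the effect: once a country's prefixes are withdrawn from a routing table, lookups for its addresses simply fail. The prefixes and peer names are made up:

```python
# Toy model of BGP route withdrawal. Real BGP involves route announcements
# between autonomous systems; this just shows why deleted routes make a
# network unreachable. The prefixes and peers are invented.
import ipaddress

routing_table = {
    ipaddress.ip_network("41.128.0.0/12"): "peer-A",  # stand-in national prefix
    ipaddress.ip_network("8.8.8.0/24"): "peer-B",
}

def next_hop(addr):
    """Longest-prefix match against the table; None means 'unreachable'."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routing_table if ip in net]
    if not matches:
        return None
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

print(next_hop("41.130.5.9"))  # reachable via peer-A
del routing_table[ipaddress.ip_network("41.128.0.0/12")]  # the "withdrawal"
print(next_hop("41.130.5.9"))  # None -- the network has vanished from the map
```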

The second way, which some reports suggest was the actual mechanism used by the Egyptian government, is to simply cut off all traffic where it enters and leaves the country. In Egypt, breakers may have been thrown at the two key Internet Exchange Points (IXPs), Ramses and Cairo.

The third method, which was reportedly used in Libya, is to throttle traffic at each ISP to the point of unresponsiveness. During the Libyan blackout earlier this month, internet servers appeared to be alive with all of their routes intact. Some hosts were even still pingable (albeit slowly). It appears that the government simply ordered the country's single ISP, LTT, to slow down bandwidth to a near standstill, making the internet unusable.

In Canada, all three of these methods are technically feasible. Assuming that ISPs fit within the terminology of a "public service", the Canadian government could invoke section 19(1) of the Emergencies Act to take control of Canada's ISPs, forcing them to make the necessary changes to their BGP routing tables. There's only a handful of major ISPs in Canada that the government would have to requisition for this task (keeping in mind that smaller ISPs such as TekSavvy simply purchase bandwidth from the major players).

For the government to deploy the second method in Canada, it would need only to send marching orders to 151 Front Street W in Toronto, home of TorIX, the major Canadian IXP. From there, it could put a stop to the majority of traffic entering and leaving Canada. Admittedly, this might not be as effective as the first method, as many servers could route around the outage -- but it would still take down a major portion of the Canadian internet for some time.

The third method of throttling the internet would, of course, be similar to the first. It would only require control of Canada's few major ISPs such as Bell, Telus, and Rogers.

In short, flipping the internet kill switch is both legally and technically possible in Canada.

The Cellphone Kill Switch

I'll add, as a side note, that the shutdown of cell phones may also be possible in Canada. The Radiocommunication Act, R.S.C. 1985, c. R-2, under a section titled "Possession by Her Majesty", provides that:

  • 7. (1) Her Majesty may assume and, for any length of time, retain possession of any radio station and all things necessary to the sufficient working of it and may, for the same time, require the exclusive service of the operators and other persons employed in working the station.

A "radio station" under the Act is broadly defined and certainly includes all cellphone base stations. Although the provision does not specifically mention the power to close a station, the term "possession" generally entails a wide-open set of powers to use, or stop the use of, the object under possession. Thus, the Canadian government likely also has the power, at present, to shut down cellphone communications.

Internet as a Fundamental Right

Overall, let me be clear that I'm not trying to fearmonger. The chance of our democratic government invoking these provisions to shut down the internet, as Mubarak's did, is just about nil. However, it's important to protect our liberties from erosion. With internet access starting to be recognized as a fundamental human right, legislation such as the Emergencies Act needs to clearly define limits ensuring that no government or official has the authority to block the internet.

The Fake Social Web

By: kent

19 Feb 2011

The practice of "astroturfing", where corporate entities disguise their lobbying efforts as grassroots campaigns, has existed in Canada for some time now. For example, last year during the campaigning efforts around Bill C-32, it came to light that a website having the appearance of a grassroots effort was, in fact, led by the Canadian Recording Industry Association (CRIA). The site went to great lengths to obscure its origins, even erasing its original list of members after it came to public attention.

Astonishingly, it appears that some astroturfing efforts are now going much further than the mere creation of misleading websites. After the hacking group Anonymous leaked e-mails from U.S. security contractor HB Gary, it came to light that the firm's lobbying practices even included the use of customized "persona management" software to create and manage fictitious personalities (often called "sock puppets") in online spaces.

One of the e-mails describes their intention to "create a set of personas on twitter, blogs, forums, buzz, and myspace under created names ... These accounts are maintained and updated automatically through RSS feeds, retweets, and linking together social media commenting between platforms. With a pool of these accounts to choose from, once you have a real name persona you create a Facebook and LinkedIn account using the given name...Using the assigned social media accounts we can automate the posting of content that is relevant to the persona."

Imagine a man coming up to your door, asking if you have a moment to talk, and then giving you a pamphlet on a particular issue. A few minutes later, a woman comes up to your door and gives you a similar pamphlet, but from a completely different organization. This woman is followed by another person supporting the same issue, then another. This scenario is exactly what HB Gary was doing, except that with the use of Facebook and automated tools, they could knock on your door much, MUCH more easily and with greater frequency.

Unfortunately, there is little legal protection against these deceptive practices. Some forms of astroturfing are prohibited by consumer protection legislation. For example, s. 14(2) of the Ontario Consumer Protection Act prohibits "[a] representation that misrepresents the purpose or intent of any solicitation of or any communication with a consumer." This would likely apply to any business posting false reviews or comments on its products. However, it's difficult to catch businesses in these sly acts, and even more difficult to enforce the law against them. Moreover, such consumer protection legislation only applies to commercial transactions, not to political lobbying.

It would be nice to see social networking sites such as Facebook and Twitter step up to the plate and take legal action against astroturfing. The creation of fictitious accounts is clearly against their terms of service; a lawsuit based on breach of contract is feasible. Clearly, these practices damage the trust that people have in the sites and thus damage their goodwill. Additionally, these sites are in the best position to detect suspicious activity in the creation of fake accounts and social networks.

I first wrote about the possibility of a "Do-Not-Track" protocol for the web a few months back, when the idea was only a brainstorm. Since then, the idea's been gaining a lot of traction, particularly in the U.S. Its arrival will certainly be a positive step forward for protecting privacy against the growing behavioural advertising market, but it may not in itself be highly effective without further regulation, due to its jurisdictional scope remaining limited to the U.S.

FTC chairman Jon Leibowitz strongly endorses this idea as a great way to prevent "unauthorized Web snooping" in the online world. Congress has already held a hearing on the matter. We may very well see U.S. legislation or regulation on this issue in the near future. Most likely, the implementation of a "Do-Not-Track" protocol will be in the form of an HTTP header. A user would switch on a "Do Not Track" option in their browser settings and then, with each request to view a webpage, the browser would tell the server not to collect information.
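Mechanically, the proposal amounts to one extra header on every request. Here's a hedged sketch using Python's urllib; the "DNT: 1" header reflects the proposals circulating at the time, not a finalized standard:

```python
# Sketch of what a do-not-track request looks like: an ordinary request
# carrying one extra header. (No network traffic is sent here; we only
# construct the request object and inspect its headers.)
import urllib.request

req = urllib.request.Request("http://example.com/", headers={"DNT": "1"})
print(req.header_items())  # the DNT header now travels with every page view
```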

Of course, this whole system is dependent on websites respecting your request for them to respect your privacy. Major websites are unlikely to do so voluntarily, given the massive financial stakes in the behavioural advertising industry. This is where regulation comes in. If this do-not-track idea comes to fruition, U.S. regulation would probably mandate that websites abide by do-not-track headers.

However, in this case, I suspect we might see a significant exodus of servers to outside the U.S. Actually, most websites would not themselves have to move their servers to escape the purview of do-not-track regulation -- only the advertising companies would. Whenever you see an ad online, it's almost always pulled directly from an advertising company's own servers, not from the website you're viewing. If the advertising server were outside the U.S., it would be beyond the reach of U.S. regulation, and there would be no sure way of even telling whether you were being tracked.

Perhaps an international treaty could help stop this problem in its tracks, or at least international co-operation in the form of other jurisdictions, such as Canada, introducing similar regulations. However, this would be years in the coming. Rather, I think the problem might best be solved by regulating internet service providers rather than the end-point websites. ISPs are in an excellent position to help gatekeep users' privacy.

In most cases, cookies that track users are sent as HTTP headers in plain-text format. Thus, it would not be computationally burdensome for ISPs to clear cookies from the relevant data packets when users are using the do-not-track header. Likely, this task would require less computing power than ISPs currently employ to shape traffic during peak usage hours.
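Here's a minimal Python sketch of that filtering logic. The header names are real HTTP headers, but the function and its policy are my own invention:

```python
# Illustration of ISP-side cookie stripping: if the client sent a
# do-not-track header, drop tracking cookies from the outgoing request.
# The policy (which headers count as "tracking") is hypothetical.
TRACKING_HEADERS = {"cookie"}

def filter_headers(headers):
    """headers: list of (name, value) pairs from a plain-text HTTP request.
    Strip tracking headers only when the user has switched on DNT."""
    dnt = any(n.lower() == "dnt" and v == "1" for n, v in headers)
    if not dnt:
        return headers
    return [(n, v) for n, v in headers if n.lower() not in TRACKING_HEADERS]

request = [("Host", "ads.example.com"), ("DNT", "1"), ("Cookie", "uid=12345")]
print(filter_headers(request))  # Cookie is gone; Host and DNT pass through
```

Note that this only works for unencrypted traffic, which is the case the post describes; HTTPS requests would pass through an ISP opaquely.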

Interestingly, in Canada, the CRTC already has full authority to implement such a policy. They have wide discretionary powers to regulate ISPs and, moreover, are mandated under the Telecommunications Act to protect the privacy of Canadians.

Since the 1990s, this objective has been somewhat neglected by the CRTC. It's rarely been invoked since it was used to create the Do Not Call list for telephone advertising years ago. However, this objective should certainly be given a second look as privacy becomes more and more of an issue. Although the CRTC has been backing away from regulating ISPs altogether in its attempt to rely on market forces, it is very clear by now that the market cannot and will not itself respect the privacy of internet users.

Merry Christmas and happy winter solstice, everyone. It has been a busy last few months as I've been wrapping up my law degree, but now, after a few days of relaxation, I'm glad to find the time again to blog out some of the unreleased thoughts that have been building up inside my head.

I've been working on a few open data projects lately in an effort to prod along the process of getting the city of Montreal to open up its data, as most other major Canadian municipalities have already done. At a recent open data "hackathon", where I was hacking together some scripts for extracting Montreal election data for open public use, it was interesting to hear some very different perspectives on whether data found on the internet was "available" for use in other projects. Views seemed to range from a cautious approach of only using data explicitly released under an open licence, to a view that "if it's up [on the internet], we can use it". As is to be expected under the prevalent libertarian hacker ethos, perspectives tended towards the latter.

The legal reality is somewhere between these two extremes. As I'll discuss, there are three legal mechanisms that restrict the use of data on the internet: contract law, privacy rights, and copyright.


Contract Law

If you have to create an account and log in to a website, you are clearly bound by the terms of use set by the host website. You are only permitted to access and use their data in the ways they stipulate.

These rules are set out in the text preceding the "I Agree" button that you clicked, without thinking twice, when you created the account. By clicking an agree button, you enter a binding contract with the website -- it's the equivalent of signing your name to the contract in ink.

Of note, typical terms of use may often bar you from using automated mechanisms to download data or from redistributing any data you obtain from the site. Terms may also put stipulations on the way you reuse or redistribute the data, such as requiring attribution. It's a good idea to at least give any terms of use a quick read-through to check whether such prohibitive terms exist.

Keep in mind that, as with all contracts, these terms only bind you and do not attach to the data itself or have any impact on anyone else (this is called "privity" of contract). For example, if some terms of service prohibit automated downloading and you manually download it from the site, you can still openly provide the data to others and allow them to download it from you via automated scripts.

For sites with no login or click-through contract, the enforceability of any terms of service posted on the website is questionable. Contracts only accessible via a hyperlink must be reasonably brought to the user's attention. It is unclear if a link to the terms of service at the bottom of a webpage is sufficient to meet this requirement, and there is little Canadian precedent on the issue. However, it certainly wouldn't be advisable to test the boundaries here when you're reusing data in your own project. If you're redistributing the data, there's clearly a higher expectation that you'd reasonably familiarize yourself with any terms of use posted.

Also note that contract law is aimed at compensating the other party for any damage caused by a breach of terms. If you're using the data you've collected to directly compete with the commercial website you extracted it from, these damages could be substantial. On the other hand, if you're using government data for a non-profit project, it's unlikely any damage at all would be caused. Even in this latter case, though, the government could still likely obtain an order for "specific performance" that would prevent you from continuing to breach the terms of use.


Copyright

Copyright subsists in all "literary works", which can be even a single sentence. Where a work is covered by copyright, you need to be particularly careful about any reuse or redistribution. Unlike contractual terms, copyright does attach to the work itself. Even if you obtain a dataset from someone who imposes no contractual terms on you, the original author (or anyone to whom she has assigned the copyright) maintains copyright. You need permission (a license) to use the work.

Importantly, copyright law does not cover "data". Unfortunately, the scope of "data" is a gray area of the law. Geographical points are certainly within the scope of data, but a map in which a cartographer has exercised "skill and judgment" is covered by copyright. As a ballpark rule, bare numbers and tables are likely free of copyright; any text containing sentences, or any drawings, is likely covered by copyright.


Privacy Rights

In most cases, you probably won't run into much trouble with contract law or copyright when using publicly posted government data for non-commercial purposes. However, privacy rights are extremely important for everyone to keep in mind, whether working on a commercial or non-commercial project. Respect for privacy rights matters not only for commercial interests, but for the protection of people's private lives and their dignity.

If you "collect, use or disclose" information for commercial purposes, this raises your obligations to protect privacy to an even higher level, as provided by the federal Personal Information Protection and Electronic Documents Act (PIPEDA).

All in all, you'll notice a sliding scale of protection in all three of these legal regimes as you go from personal use, to non-commercial use, to commercial use. Try to always get explicit permission for your particular use of data from the data distributor, as well as from the original author/copyright holder. Where you haven't obtained explicit permission, it's a good idea to give each of these legal controls careful consideration.

Today, the unfortunate reality of access to primary legal materials in Canada is that we're granted only a restricted "right to know" about our laws -- even though these laws impact every aspect of our lives. The last few years have seen an important broadening of what I call "first generation" access; this allows us to view our laws for the purposes of reading them. Websites such as CanLII and governmental court sites play an important role in providing this type of access. However, while useful to legal practitioners and scholars, such access rights are limited in their capacity to help the general public.

Second generation rights, as promoted by the open access movement, allow us to do more than simply read documents. When legislation, case law, and other documents are released with few restrictions, citizens are able to build upon the data. We can create tools to enhance their usability and usefulness for everyone. Researchers can analyze the data in new ways. Communities can develop around building and analyzing legal materials and tools.

As espoused by Law.Gov, legal materials must be available with as few "limitations on access through terms of use" as possible, be made available "using bulk access mechanisms so they may be downloaded by anyone", and be "distributed in a computer-processable, non-proprietary form" (amongst other important principles).

Aside from my own project, which I'll explain in a moment, I'm quite certain there's not a single case law website in Canada that abides by even one of these principles. All existing websites put unnecessary restrictions on access. Many sites cut off access after a user downloads only a few dozen judgments. No service provides machine-readable metadata with information such as a judgment's style of cause and judgment date.

This clearly needs to change, and it is a primary motivation behind Nomus.ca, the legal search engine for Canadian case law that I develop and maintain. Although many judgments on Nomus are still restricted by their original copyright, I'm attempting to at least clear them of some terms of use, provide mechanisms for bulk download, and provide machine-readable metadata.
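To make the idea of machine-readable metadata concrete, here is a minimal sketch of what per-judgment metadata could look like when published in a computer-processable, non-proprietary format such as JSON. The field names and values below are purely illustrative assumptions for this example; they are not Nomus's actual schema or a real judgment record.

```python
import json

# Hypothetical metadata for a single judgment. The fields shown
# (style of cause, court, citation, date) mirror the kinds of
# information discussed above; all values here are made up.
judgment_meta = {
    "style_of_cause": "R. v. Example",
    "court": "Supreme Court of Canada",
    "neutral_citation": "2011 SCC 1",
    "judgment_date": "2011-01-28",
    "language": "en",
}

# Serializing to JSON keeps the record non-proprietary: anyone can
# parse it with standard tools, in bulk, without screen-scraping.
record = json.dumps(judgment_meta, indent=2)
print(record)

# A consumer can round-trip the record just as easily.
parsed = json.loads(record)
print(parsed["style_of_cause"], parsed["judgment_date"])
```

The point is not the specific format (XML or CSV would serve equally well) but that structured, documented fields let researchers and tool-builders work with thousands of judgments at once, rather than reading one web page at a time.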

To help kick-start these second-generation access-to-law rights, and in celebration of Right to Know week (this week), I'm pleased to announce the launch of a new platform for Nomus.ca.

The new platform is built upon the popular open-source Drupal system. This should make it much easier to continue adding exciting new features! Additionally, for anyone interested in helping to build upon this platform, please watch for Nomus on GitHub within the next few days. I will be licensing the whole platform as open source software; I welcome and encourage any development contributions!

For those interested in the hurdles and data issues I encountered in developing Déchets Montreal, an interview I did with Montreal Ouvert (also available in French) gives further details on the development process.

Also note that I've posted the full source code for the project online at GitHub.