
Belgium leaps forward on the national Open Data Index

December 10, 2015 in Featured, Open Data News

Open Knowledge International just launched the final 2015 index, ranking 122 countries based on the openness of 13 key datasets. In this ranking, Belgium jumped from 53rd place to 35th place and went from being 39% to being 43% open. What does this mean, and how will it affect open data publishing in the future?

It means the federal government takes off

In last year's blog, we had to put things in perspective on Why Belgium ranks so low on the Open Data Index. One of our takeaways was that open data publishing was very active in local cities and regions, but not so much at the federal level, which is vital to a decent score on the index. That's because the index mainly focuses on national efforts and datasets. The Belgian government took notice of this during the Open Belgium Conference 2015 and committed to working towards a strategy to overcome it.

Five months later, an Open Data strategy was approved by the federal ministerial council defining the baseline of a more Open Belgium.

Statbel takes the first leap in the dark

After that strategy was published during the summer, the first noticeable changes happened. Statbel, the federal department responsible for national statistics, was the first to announce that its data would fall under an open license, and it has now opened the first datasets, with more coming soon. This is great news, as 'national statistics' is one of the key datasets on the Open Data Index. This dataset should therefore receive full marks on the 2016 index, provided the right data is opened.

We’re not quite there yet

There is still a long way to go, though, if we ever want to tie with our neighbouring countries. The 13 key datasets involve very different departments, from government spending to weather forecasts. And to make matters more complex, datasets like water quality and pollutant emissions are not a federal responsibility but a regional one. So even if, for example, the Flemish, Brussels or Walloon government scores very high on one of these datasets, Belgium will only score positively if all regions make the same data available.

Since Statbel has taken the first steps and a federal Open Data strategy is now on the table, we have high hopes for 2016. Even though we're not quite there yet, and still lagging behind neighbouring countries, a truly Open Belgium is approaching. We're looking forward to all Open Data efforts in 2016.

“Escalator” by Unsplash is licensed under CC0.

Open Call for Open Presentations

October 7, 2015 in Events, Featured

Do you want to contribute to the Open Belgium 2016 conference in Antwerp? Submit a proposal in our Open Call; you have until the 15th of November 2015 to propose a session.

Nostalgeo, Streetview of the past

September 22, 2015 in Featured

Did you ever wander through your hometown, pondering what everything looked like, let's say, one hundred years ago? People tend to be interested in the local history of their hometown. That's why Nazka rolled out Nostalgeo, which merges past and present in a visual way. Put shortly, it's the Google Streetview of the past.

Nostalgeo is an online web app that gives you a peek into the past, revealing history in a fascinating way. Rather than just showing old pictures of your hometown, it combines them with Google Streetview. That way, the old photograph forms an overlay that situates the past in the present. And thanks to the slider, you can easily compare past and present.

Open up regional history

All photographs, postcards, etc. uploaded to the site are Open Content. Digitising such pictures prevents them from being forgotten or lost; instead, they can be used for various purposes. At the moment, most pictures are situated in the Waasland, and more specifically in Sint-Niklaas. That's because Nazka worked closely with the Oudheidkundige Kring Waasland, which possesses a great deal of historical postcards of that region. Students of SBSO Baken carefully put all the pictures in the right place. But Nostalgeo plans to map the whole of old Belgium. To do so, anyone who owns old photographs and postcards can easily upload them to the site and place them on the right spot. An ambitious project, but at Nazka they have faith in a country-wide Nostalgeo. And who knows, with the right assets, Nostalgeo might one day become the historical Streetview of the world.

Open Data Day Flanders 2015

May 11, 2015 in Events, Featured, local governments, open data

Open Data Day

The Flemish Government claims to have attached more value to Open Data these past years. But those words don't mean much if they aren't backed by action. So what has the Flemish Government been doing in the Open Data arena? That's something you can find out at Open Data Day. And more importantly, you may have the chance to give your opinion. What's more, you can participate in Open Data Day completely free.

On the nineteenth of June, Open Data Day (already the fourth edition) will take place in the Boudewijngebouw in Brussels. This year is all about transparency and services. And in order to get the most out of Open Data, citizens and companies will be heard too. How? Through round-table discussions, in which all participants can debate a specific topic. This way, the Flemish Government can collect ideas and thoughts, in order to work out an Open Data policy with widespread support.

Experts and citizens combine powers

There will be five discussions in the morning and five in the afternoon, each handling a different subject. Each debate has twenty participants, sixteen of whom are considered 'experts'. Those experts are drawn from the public sector, industry and civil organisations. The remaining four participants are citizens: two men and two women. If you want to take part, you can register on the Open Data Day website. Participation is free of charge.

Realising ideas

There will be five subjects to discuss: mobility, environment, economy, statistical data, and geographical data (geodata). Through the round-table discussions, the most valuable concepts will come forward, so the Flemish Government knows which ideas are interesting for both the government's side and the citizens' side. Those ideas will be followed up and, where possible, realised in 2015 and 2016.

Remainder of the day

The remainder of the day is full of interesting keynotes and presentations, most of them about Open Data projects that are already up and running. You can find the programme here.

In short, this day is an absolute must for everyone who's interested in or involved with Open Data. So why don't you register right away?

A quick guide through Open Science

February 17, 2015 in Events, Featured, Openbelgium15, OpenBelgium2015

“Even if the open windows of science at first make us shiver after the cozy indoor warmth of traditional humanizing myths, in the end the fresh air brings vigor, and the great spaces have a splendor of their own.” Bertrand Russell

Note for all you TL;DR people: Check out this clip about Open Access by PhD comics.

Open Science

Open Science can be described as the movement aiming to integrate 'open' workflows into the whole research cycle: from the actual research to publishing research results and data.
During this session, we will mainly focus on the publication of research results (publications and data), trying to make these as broadly accessible as possible for as many people as possible (Open Access to research).

Based on processes and workflows already firmly established in other areas (such as software development), researchers have become increasingly aware that they are not operating in a vacuum, and that their research can reach a much wider audience than just their direct peers. Especially for the born-digital generation, the possibilities for disseminating their work are no longer aligned with what the traditional research publication system (based on digital versions of paper journals, their ranking, high subscription prices and strict copyright restrictions) has to offer.

On top of this, there is also an access problem: perhaps not that obvious when you're affiliated with a research institution that can afford expensive journal subscriptions (even then it's sometimes problematic!), but very clear when this is not the case (think of journalists, health professionals, teachers, independent consultants, SMEs, but also many researchers in the developing world).

The Open Access movement has tried to fix these issues via two complementary routes: encouraging researchers to deposit digital versions of their work in Open Access archives ('repositories'), and reforming the scientific publishing system by encouraging existing and new journals to 'go Open Access' and stop charging readers to read the articles. This has been relatively successful: five years ago, at best 8% of all research was available in some form of Open Access. Anno 2014, this number is up to 50%.

There were (and still are) some bumps in the road, though: in some fields, Open Access awareness is still very low, and Open Access research is still often perceived as low-quality research (the fact that most Open Access journals are still very young has consequences for their ranking in traditional journal qualification systems). Copyright restrictions and strict licensing are still a big obstacle with a lot of publishers (a problem that is even more pressing for research data and text and data mining).

Additional problems are caused by the so-called 'article processing charges' (APCs) levied by Open Access publishers to compensate for the revenue lost by abandoning subscription charges. Ideally these are intended to cover publishing costs and to ensure the economic viability of the publisher, but some publishers charge unreasonably high APCs, making 'author pays' Open Access a very interesting and profitable business model for scientific publishers.

The large scientific journals have found an at least questionable way to exploit Open Access commercially ('hybrid Open Access': charging APCs for individual articles while not making the whole journal Open Access). There has also been a rise in low-quality (sometimes even fraudulent) Open Access journals, which charge high APCs while not delivering the quality standards expected by the submitting researcher.

Luckily, there are plenty of initiatives tackling these issues. Doing Open Access 'the right way' has become a subject of interest for plenty of publishers, researchers, library and research administration staff, and policy makers. During this Open Science session, we'll be hearing from four of them:

Bernard Rentier (Université de Liège and Enabling Open Scholarship) and Inge Van Nieuwerburgh (Universiteit Gent) will talk about the successful Open Access policy they have put in place at their respective universities: requiring researchers to deposit all their research in the institution's repository immediately upon acceptance, and providing Open Access to it as soon as possible. This policy model has been an inspiration for the very influential Open Access policies now in place at national and international level (for instance in the €80 billion Horizon 2020 programme of the European Commission). Inge will also address several of these national and international policies, figuring out if and how they affect Open Access adoption amongst researchers worldwide.

We’re also happy to have Brian Hole from Ubiquity Press on board: he will explain how his publishing company combines a fair business model with state-of-the-art publishing workflows.
And, last but not least, there’s Joseph McArthur. As a student he was one of the developers of the Open Access Button. Now graduated, he’s one of the most active and prolific Open Access advocates around, working for the Right to Research Coalition.

Of course, we are also counting on you. What are your experiences with Open Science? Do you have any questions for the panel? Don’t hesitate to contact me this week! Tweet, mail or send me a postcard.

(Oh, and I am Gwen Franck. I work for Creative Commons as Regional Coordinator Europe, and for EIFL as a partner in the European Open Access projects FOSTER, OpenAIRE and PASTEUR4OA. Occasionally I also tweet for Open Access Belgium, a collaboration between UGent and ULg.)

If you want to read more, check this:

Why do we need OpenStreetMap? It's the community, stupid.

February 10, 2015 in datadays2014, Events, Featured

These are exciting times. Open data is everywhere! In the past couple of years we have seen a lot of very interesting data open up, and as a result things have changed. Startups related to open data have popped up everywhere, some very successfully.

OpenStreetMap celebrated its 10th birthday in 2014; it has been around for ages in open data terms. Very relevantly, the project got started because of a lack of open geodata to experiment with. The question that then comes to mind is: why would we need OpenStreetMap in a world where all (geo)data is open?

From an OpenStreetMap community member's perspective, the answer to this question is obvious: it's the community, stupid!

In our session at the Open Belgium conference in Namur, we will try to give you an inside view of our community and all of its different aspects and activities. We hope that those who attend our session will also come to see our community as the answer to the question of why the world needs OpenStreetMap in an open world.

OpenStreetMap is so much more than just an open geo database. If you are looking for new ideas related to geo, want to know more about OpenStreetMap, or want to become part of our community, make sure you don't miss out and attend Open Belgium!

Want to know more about the OpenStreetMap community? Come to the Open Belgium Conference OpenStreetMap session and find out what this community looks like and how members contribute.
Or follow the Belgian OSM Community on Twitter or their website.

Cover-image CC-BY-SA

From raw data to finely crafted mosaics: the importance of standards

February 3, 2015 in Events, Featured, Openbelgium15, OpenBelgium2015

Now that large amounts of open data are becoming available, along with efficient visualization tools for their respective types, one of the next challenges is to make sense of these data in the scope of particular domains and use cases. Be it enriching a breaking news video with relevant graphs, contextualizing a budget report with related public policy excerpts, or bringing city statistics to life with localized pictures, it's all about finding the right datasets that bring sense to each other. A fair part of making that sense lies in the ability to discover the right data, deconstruct it, and tie the fragments together into mosaics that carry more information than the sum of their elements.

On the path to data valorization, the first step is the discoverability of data. While cataloguing tools and open formats are now becoming mainstream (cf. CKAN and its numerous public deployments), the usage of open metadata standards is still lagging behind: sometimes because of proprietary metadata structures that prevent cross-domain discoverability, more often because datasets lack proper metadata altogether. While the former is being solved by the emerging use of standardized vocabularies (DCAT and INSPIRE, to name a few), the latter is mostly a matter of raising awareness, in all data publishing bodies, that metadata is just as important as data.
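To give a feel for what such standardized metadata looks like, here is a minimal DCAT-style dataset description, sketched as JSON-LD in Python. The dataset title, publisher and URLs are hypothetical examples, not entries from any real catalogue.

```python
import json

# A minimal DCAT-style dataset description in JSON-LD.
# Title, publisher and URLs below are made-up placeholders.
dataset_metadata = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Example national statistics dataset",
    "dct:publisher": "Example Federal Agency",
    "dct:license": "http://creativecommons.org/publicdomain/zero/1.0/",
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:downloadURL": "http://data.example.org/statistics.csv",
        "dcat:mediaType": "text/csv",
    },
}

print(json.dumps(dataset_metadata, indent=2))
```

Because the vocabulary (dcat:, dct:) is shared, a harvester that understands DCAT can index this record alongside records from any other catalogue, which is exactly the cross-domain discoverability proprietary structures prevent.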

The next step in data reuse is the ability to transform data to match the tools and frameworks where it is to be used. Having data in an open format is good, but there often exist multiple potential open formats for the same dataset, and each context of use comes with a set of tools that may support only some of them. CSVs may need to be turned into KML, or XML into JSON. This is where on-the-fly data transformation tools such as The DataTank come into play, easing data processing by removing format friction.
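As a rough sketch of this kind of format translation (not The DataTank's actual API), a few lines of Python can already turn a CSV payload into JSON; the sample cities and figures are illustrative only:

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text into a JSON array of row objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

# Hypothetical sample data.
sample = "city,population\nAntwerp,510610\nGhent,248242\n"
print(csv_to_json(sample))
```

An on-the-fly transformation service essentially does this per request: the client asks for a format it supports, and the service converts from whatever format the publisher provided.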

Lastly, real added value can be created by going below the surface of the datasets, i.e. by no longer consuming datasets as unsplittable entities, but rather chunking them, taking the relevant parts for the subject at stake, and stitching the fragments into meaningful data mosaics. Some standards exist or are emerging to tackle that problem, like URI Fragments, Open Annotations, and the whole Linked Data toolbox, but a complete stack for the authoring and publication of such mosaics is still to be produced. Once achieved, such an environment would allow anyone to easily deconstruct datasets, build contextualized data mashups and exchange them as documents on their own, while relying directly on the original, remote data sources.
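The chunk-and-stitch idea above can be sketched in a few lines of Python. The two datasets and the join key below are hypothetical stand-ins for the fragments a real mosaic tool would fetch from remote sources:

```python
# Two hypothetical dataset fragments sharing a "city" key.
statistics = [
    {"city": "Antwerp", "population": 510610},
    {"city": "Namur", "population": 110939},
]
pictures = [
    {"city": "Antwerp", "photo": "http://img.example.org/antwerp.jpg"},
    {"city": "Brussels", "photo": "http://img.example.org/brussels.jpg"},
]

def stitch(left, right, key):
    """Join only the fragments that share a value for the given key."""
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]

mosaic = stitch(statistics, pictures, "city")
print(mosaic)  # one combined record, for Antwerp
```

The point is that neither dataset is consumed whole: only the overlapping fragments are selected and combined, and the resulting mosaic carries information (population plus picture) that neither source held on its own.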

Curious to find out more? Come to the Open Data Tools and Standards session at 13.30 in the Auditorium Félicien Rops, where we will discuss this further.

Overcoming the hurdles of Open culture in Belgium

January 25, 2015 in Featured, Openbelgium15, OpenBelgium2015

In January 2015, Paul Otlet's heritage entered the public domain. Some 80 years ago, Otlet designed plans for a global network of computers that would allow people to search and browse through millions of interlinked documents, images, audio and video files. He described a networked world where “anyone in his armchair would be able to contemplate the whole of creation”.

One of the online spaces where the whole of cultural creation can currently be discovered is Europeana: a portal website guiding you to millions of old historical maps, pretty painted ladies, broadcast art documentaries and much more. If only Otlet had lived to see this… but wouldn't he then also want to go and do things with all the beauty he could discover there? Probably, but then we would need all these pictures, AV snippets and texts to be Open instead of (many) Rights Reserved.

He could then go on and do marvellous things with it, twisting the uses we've known so far, hell, perhaps even squeezing some money out of it. Open does not only pave the way for creative re-use; it also helps us know what really is the (digital) original. Not ringing a bell? Then dive into the issue of the Yellow Milkmaid.


Wikimedia would also benefit greatly from a bit more cultural content that's CC-BY-SA licensed. And awareness of how to do it is increasing with the creation of GLAM networks (Galleries, Libraries, Archives & Museums) such as OKF's offspring OpenGlam. Belgium, Finland and the Netherlands each ran an Open Culture Data programme, so you might think that we've more or less arrived. Alas, there are still some hurdles to overcome: I will have to invest so much time and fees into rights clearing! Maybe someone else has a brilliant money-making idea with the content I freed up and I will see no revenue! How am I going to keep (any) control?

Were Paul Otlet still here, he might go 'WHAT?!' and fail to see why culture still finds it so difficult to be(come) open. During the Open Belgium conference, the Mundaneum in Mons will talk about opening up Otlet's own legacy through its museum archive. We'll talk copyright, other hurdles and barriers, business opportunities and new possibilities on the horizon. Join us during the Open Belgium session!

Entrance bursary requests opened for Open Belgium 15

January 22, 2015 in Events, Featured, Openbelgium15

Organising a community conference such as Open Belgium takes a lot of time, effort and money, and to cover those costs we need to put a price on our tickets. But to ensure everyone has a chance to come to this event and discuss open knowledge in Belgium, we are happy to announce that we are providing 5 bursary tickets for students and people with no steady income. All you have to do to obtain one of these tickets is fill in the form below. After the deadline, we will assess who deserves these tickets and notify everyone who has applied.

Let’s meet at the SAI Data Summit Brussels

December 15, 2014 in Events, Featured

SAI, “Studiecentrum voor automatische informatieverwerking” or ‘study centre for automatic information processing’, is organising the SAI Data Summit Brussels on the 5th of March 2015. This is an event where Open Data, Big Data, Smart Data and Linked Data tools are presented on the same day. Data is the new oil for our economies, and you had better have the right toolkit at hand. During this event, SAI will present leading and cool tools to crawl, clean, convert, visualise and analyse data. And, importantly, the tools are affordable for everyone. So this event is for everyone interested in data: from data analysts, data scientists, data journalists, open data evangelists and innovators to hackers and more.

The master of ceremonies is Louis Dorard, of Bootstrapping Machine Learning fame. He will guide you through this event packed with interesting tools.

Location: Van der Valk Brussels Airport Hotel, Brussels
Tickets: Register here
Date: 05-03-2015 (13:15 – 17:45)
Language: English
125 EUR for SAI members
175 EUR for non-SAI members

13.15 – 13.30 Registration

13.30 – 14.00 Tackling data: what can we do with data and which tools come in handy ?
Speaker: Louis Dorard. He is the author of Bootstrapping Machine Learning and co-founder of the International Conference on Predictive APIs and Apps. He is a data consultant and partner at Codole. He studied machine learning at University College London.

14.00 – 14.30
Speaker: Alex Gimson, European Evangelist at a service that turns any website into a table of data or an API. Web scraping on steroids.

14.30 – 14.50 OpenRefine
Speaker: Ruben Verborgh. OpenRefine is an open source tool for working with messy data: cleaning it and transforming it from one format to another. Ruben is a researcher in Semantic Hypermedia at the Multimedia Lab of iMinds, Universiteit Gent. He is co-author of ‘Using OpenRefine’, published by Packt.

14.50 – 15.00 The DataTank
Speaker: Jan Vansteenlandt. The DataTank transforms any type of raw or binary data into machine- or human-readable (semantic) data and automatically provides a RESTful API on top of it. Jan is one of the co-creators and developers of The DataTank software.

15.00 – 15.30 Datawrapper
Speaker: Mirko Lorenz. Datawrapper is an open source tool to create simple, correct and embeddable charts in minutes; it is used at De Standaard, among others. Mirko is a journalist/information architect who conceived the idea for the project in 2011. He is co-author of the Data Journalism Handbook and a trainer in data-driven journalism.

15.30 – 15.50 Coffee Break

15.50 – 16.00 DaPaaS
Speaker: Marin Dimitrov. DaPaaS is a Data-and-Platform-as-a-Service tool that optimises and simplifies both the publication and use of Open Data across different platforms. Marin is the CTO of Ontotext, a leading supplier in the semantic web space.

16.00 – 16.20 Tableau Public
Speaker: Bjorn Cornelis. Tableau Public is free software that allows anyone to connect to a spreadsheet or file and create interactive data visualizations for the web. Bjorn is a senior business intelligence consultant at Biztory.

16.20 – 16.40 Microsoft Power BI Tools
Speaker: Frederik Vandeputte. Power BI is Microsoft’s offering in the self-service BI-space. Frederik is a senior consultant and partner at Kohera and the president of the Belgian SQL Server User Group.

16.40 – 17.10 BigML
Speaker: David Gerster. BigML offers machine learning and predictive analytics as a cloud service. David is BigML’s vice-president of Data Science.

17.10 – 17.40 Dataiku Data Science Studio
Speaker: Kenji Lefèvre. Dataiku is an end-to-end solution that turns raw data, step by step, into a predictive API. Kenji is Dataiku's head of product.

17.40 – 17.45 Conclusions by Louis Dorard

17.45 – Books giveaway … Drinks and networking

Additional benefits
Attendees will receive promo codes for the following books:

Need more information or have a question?
Jacques Vandenbulcke
Professor at the Faculty of Applied Economics of the KU Leuven

Join the OKFN Belgium mailing list