
Open Call for Open Presentations

October 7, 2015 in Events, Featured

Do you want to contribute to the Open Belgium 2016 conference in Antwerp? Submit a proposal to our Open Call; you have until the 15th of November 2015 to propose a session.

Open Data Day Flanders 2015

May 11, 2015 in Events, Featured, local governments, open data

Open Data Day

The Flemish Government claims to have attached more value to Open Data in recent years. But those words don’t mean much unless they are backed up by action. So what has the Flemish Government actually been doing in the Open Data arena? That’s something you can find out at Open Data Day. More importantly, you may get the chance to give your opinion. What’s more, you can participate in Open Data Day completely free.

On the nineteenth of June, Open Data Day – already its fourth edition – will take place in the Boudewijngebouw in Brussels. This year is all about transparency and services. And in order to get the most out of Open Data, citizens and companies will be heard too. How? Through round-table discussions, in which all participants can debate a specific topic. In this way, the Flemish Government can collect ideas and thoughts in order to work out an Open Data policy that has widespread support.

Experts and citizens combine powers

There will be five discussions in the morning and five in the afternoon, each covering a different subject. Each debate has twenty participants, sixteen of whom are considered ‘experts’. Those experts are drawn from the public sector, industry and civil organisations. The remaining four participants are citizens, two men and two women. If you want to take part, you can do so by registering on the Open Data Day website. Participation is free of charge.

Realising ideas

There will be five subjects to discuss: mobility, environment, economy, statistical data, and geographical data (geodata). Through the round-table discussions the most valuable concepts will come forward, so the Flemish Government knows which ideas are interesting for both the governmental side and the citizens’ side. Those ideas will be followed up and, if possible, realised in 2015 and 2016.

Remainder of the day

The remainder of the day is full of interesting keynotes and presentations, most of them about Open Data projects that are already up and running. You can find the programme here.

In short, this day is an absolute must for everyone who’s interested in or involved with Open Data. So, why not register right away?

Open data & Biodiversity Research

May 6, 2015 in Events

There is a world of tools, standards and data out there, ready for you to use.

“Biodiversity Informatics” deals with the application of informatics techniques to biodiversity information for improved management, presentation, discovery, exploration and analysis of scientific data. Combined with open data sources on the Internet, this is a powerful new approach in doing research.

Bringing these tools and data to scientists, citizen scientists or open data adepts is one of the tasks of the Belgian Biodiversity Platform. To achieve this goal we decided to organize the “Empowering Biodiversity Research” conference on the 21st of May 2015 in Brussels.

During this conference, we would like to take you on a tour of the world of biodiversity informatics & open data. Several interesting projects will be highlighted: for example, you will get to know the whereabouts of Eric the seagull (a dataset published as open data on the internet) and learn how to deal with rare animal occurrences and put them in context.


Whereabouts of the Belgian tagged Seagulls

We will also look at Antarctic biodiversity and find out how you could use a taxonomic backbone to improve your biodiversity data.



Antarctic Biodiversity Portal

During this conference we will try to make the link between open data initiatives and freely available biodiversity data on the internet. The Global Biodiversity Information Facility is the world’s biggest aggregator of this type of data. We hope to have some fruitful discussions with interested people from the open data community.

For more information on this conference, please visit the website. The registration deadline is in a couple of days, on May 7th. And yes, a networking event is planned.

Last chance to enter Hackastory’s Hackathon!

April 30, 2015 in Events


Want to flex some interactive story muscles?

Join the Hackastory hackathon during !FLAB on 2 & 3 May!

Registrations are going fast! Hackastory & !FLAB are still looking for creative coders and designers to enter the Hackastory Hackathon on the 2nd and 3rd of May in Gent (BE). Interested in participating and sharing your knowledge with others? Enter the Hackathon now!

Hackathons are great for experimentation and finding new perspectives on ideas and concepts. By building your idea quickly (rapid prototyping) you’ll discover the bigger challenges for the long run of your project. Besides the practical positives, it’s a great place to be. Generating a creative atmosphere with like-minded daredevils is crucial to getting something done.

Sound interesting?

Goal of the hackathon:
…to experiment and to build a prototype for a new way of interactive digital storytelling in 48 hours.

We offer:
…a comfortable place to work for max. 25 people
…to start or refine a project
…breakfast, lunch and dinner
…a creative atmosphere

The following factors are essential for a great hack:
… you need to be open to collaboration.
… you’re not afraid to fail
… you find process as important as product
… you dare to dream big
… you will share the result open source with the world wide web

Good to know:
… participation fee is 50 euros (including drinks, breakfast, lunch (2x) and dinner)
… it’s all open source and we’ll share the results of the experiments
… to have a great hackathon you need a good mix of skills. So, there will be a selection procedure.

After the Hackastory Hackathon has finished, you can fine-tune your newly developed prototype at !FLAB’s booster workshops.

Apps for Ghent is useful for more than just hackers

March 20, 2015 in Events

On Saturday 21 March, the Apps for Ghent hackathon takes place in the central library, with ‘the library of the future’ as its theme. This hackathon is interesting for the non-hacking public too: throughout the day there are quite a few workshops and presentations that are open to everyone.


1. Build robots for free with Timelab:

As part of Apps for Ghent, Timelab offers two workshops for library staff: ‘the most useless machine’ and ‘beambots’, step-by-step workshops for building a simple robot or machine. No prior knowledge is required. These workshops are part of Timelab’s regular offering and are exceptionally free during Apps for Ghent. You can find more info here:

10.00-13.00 most useless machine
14.00-17.00 beambots

Register for one of the workshops here – please register today!

2. Work on your own privacy at the privacy café:

In collaboration with the Liga voor Mensenrechten, we also want to create space to learn how you can handle data better yourself, and how improved privacy creates more room to work with open data in a truly meaningful way.
To give more people insight into what you can do yourself, we are organising a Privacy Café during Apps For Ghent, where a number of mini-workshops will teach you tools that help you keep more control over your own data.

From 12:30 to 15:00.
For more info and registration:

Open Sessions

After and alongside the workshops there is even more to do. In the Musschezaal there is also a pop-up makerspace by Nerdlab, where visitors get a first taste of what the library of the future will look like.

At 15.30 the actual pitch sessions take place, where the hackers present the applications they have spent all day building on top of the Open Data of the City of Ghent.

At 17:30 there are a number of data talks:

  • Open Access: Inge Van Nieuwerburgh
  • Data journalism case, in collaboration with REC

There is a reception at 18:00 and the award ceremony at 18:30.

So there is something for everyone.
You can find the full programme of Apps for Ghent at:

A quick guide through Open Science

February 17, 2015 in Events, Featured, Openbelgium15, OpenBelgium2015

“Even if the open windows of science at first make us shiver after the cozy indoor warmth of traditional humanizing myths, in the end the fresh air brings vigor, and the great spaces have a splendor of their own.” Bertrand Russell

Note for all you TL;DR people: Check out this clip about Open Access by PhD comics.

Open Science

Open Science can be described as the movement aiming to integrate ‘open’ workflows into the whole research cycle: from the actual research to the publication of research results and data.
During this session, we will mainly focus on the publication of research results (publications and data) – and on trying to make these as broadly accessible as possible for as many people as possible (Open Access to research).

Based on processes and workflows already firmly established in other areas (such as software development), researchers have become increasingly aware that they are not operating in a vacuum – and that their research can reach a much wider audience than just their direct peers. Especially for the born-digital generation, the possibilities for disseminating their work are no longer aligned with what the traditional research publication system (based on digital versions of paper journals, their ranking, high subscription prices and strict copyright restrictions) has to offer. On top of this, there is also an access problem – perhaps not that obvious when you’re affiliated with a research institution that can afford expensive journal subscriptions (even then it’s sometimes problematic!), but very clear when this is not the case (think of journalists, health professionals, teachers, independent consultants, SMEs, but also many researchers in the developing world). The Open Access movement has tried to fix these issues along two complementary routes: encouraging researchers to deposit digital versions of their work in Open Access archives (‘repositories’), and reforming the scientific publishing system by encouraging existing and new journals to ‘go Open Access’ and no longer charge readers to read the articles. This has been a relatively successful process: 5 years ago, at best 8% of all research was available in some form of Open Access; anno 2014, this number is up to 50%.

There were (and still are) some bumps along the way though: in some fields, Open Access awareness is still very low – and Open Access research is still often perceived as low-quality research (the fact that most Open Access journals are still very young has consequences for their ranking in traditional journal qualification systems). Copyright restrictions and strict licensing remain a big obstacle with many publishers (a problem that is even more stringent when talking about research data and text and data mining).

Additional problems are caused by the so-called ‘article processing charges’ (APCs) levied by Open Access publishers to compensate for the loss of revenue due to the abandonment of subscription charges. Ideally intended to cover publishing costs and to ensure the economic viability of the publisher, some publishers charge unreasonably high APCs – making ‘author pays’ Open Access a very interesting and profitable business model for scientific publishers.

The large scientific journals have found an at least questionable way to exploit Open Access commercially (‘hybrid Open Access’: charging APCs for individual articles while not making the whole journal Open Access). There has also been a rise in low-quality (sometimes even fraudulent) Open Access journals, charging high APCs while not delivering on the quality standards expected by the submitting researcher.

Luckily, there are plenty of initiatives tackling these issues. Trying to do Open Access ‘the right way’ has become a subject of interest for plenty of publishers, researchers, library and research administration staff and policy makers. During this Open Science session, we’ll be hearing from four of them:

Bernard Rentier (Université de Liège and Enabling Open Scholarship) and Inge Van Nieuwerburgh (Universiteit Gent) will talk about the successful Open Access policies they have put in place at their respective universities: requiring researchers to deposit all their research in the institution’s repository immediately upon acceptance, and providing Open Access to it as soon as possible. This policy model has been an inspiration for the very influential Open Access policies now in place at national and international level (for instance in the €80 billion Horizon 2020 programme of the European Commission). Inge will also address several of these national and international policies, figuring out if and how they affect Open Access adoption amongst researchers worldwide.

We’re also happy to have Brian Hole from Ubiquity Press on board: he will explain how his publishing company combines a fair business model with state-of-the-art publishing workflows.
And, last but not least, there’s Joseph McArthur. As a student he was one of the developers of the Open Access Button. Now graduated, he’s one of the most active and prolific Open Access advocates around, working for the Right to Research Coalition.

Of course, we are also counting on you. What are your experiences with Open Science? Do you have any questions for the panel? Don’t hesitate to contact me this week! Tweet, mail or send me a postcard.

(Oh, and I am Gwen Franck. I work for Creative Commons as Regional Coordinator Europe, and for EIFL as a partner in the European Open Access projects FOSTER, OpenAIRE and PASTEUR4OA. Occasionally I also tweet for Open Access Belgium, a collaboration between UGent and ULg.)

If you want to read more, check this:

Why do we need OpenStreetMap? It’s the community, stupid.

February 10, 2015 in datadays2014, Events, Featured

These are exciting times. Open data is everywhere! In the past couple of years we have seen a lot of very interesting data open up, and as a result things have changed. Startups related to open data have popped up everywhere, some of them very successfully.

OpenStreetMap celebrated its 10th birthday in 2014 – in open data terms, it has been around for ages. A very relevant fact is that the project got started because of a lack of open geodata to experiment with. The question that then comes to mind is: why would we need OpenStreetMap in a world where all (geo)data is open?

From an OpenStreetMap community member’s perspective the answer to this question is obvious: it’s the community, stupid!

In our session at the OpenBelgium conference in Namur we try to give you an inside view of our community and all its different aspects and activities. We hope that those who attend our session will also come to see our community as the answer to the question of why the world needs OpenStreetMap in an open world.

OpenStreetMap is so much more than just an open geo database. If you are looking for new ideas related to geo, want to know more about OpenStreetMap, or want to become part of our community, make sure you don’t miss out and attend OpenBelgium!

Want to know more about the OpenStreetMap community? Come to the OpenStreetMap session at the Open Belgium Conference and find out what this community looks like and how members contribute.
Or follow the Belgian OSM Community on Twitter or their website.

Cover-image CC-BY-SA

From raw data to finely crafted mosaics: the importance of standards

February 3, 2015 in Events, Featured, Openbelgium15, OpenBelgium2015

Now that large amounts of open data are becoming available, along with efficient visualization tools for their respective types, one of the next challenges is to make sense of these data in the scope of particular domains and use cases. Be it enriching a breaking news video with relevant graphs, contextualizing a budget report with related public policy excerpts, or bringing city statistics to life with localized pictures, it’s all about finding the right datasets that bring sense to each other. A fair part of making that sense lies in the ability to discover the right data, deconstruct it and tie the fragments together in mosaics that carry more information than the sum of their elements.

On the path to data valorization, the first step is discoverability of data. While cataloguing tools and open formats are now becoming mainstream (cf. CKAN and its numerous public deployments), usage of open metadata standards is still lagging behind. Sometimes this is because proprietary metadata structures prevent cross-domain discoverability; more often it is because datasets lack proper metadata altogether. While the former is being solved by the emerging use of standardized vocabularies (DCAT, INSPIRE, to name a few), the latter is mostly a matter of raising awareness, among all data publishing bodies, that metadata is just as important as data.
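To illustrate what such a standardized metadata record can look like, here is a minimal DCAT dataset description built as JSON-LD in Python. All titles and URLs below are invented placeholders, not a real catalogue entry:

```python
import json

# A minimal DCAT dataset description in JSON-LD.
# Every value below is a made-up placeholder for illustration.
dataset = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "City statistics (example)",
    "dct:description": "Yearly statistics per district.",
    "dcat:keyword": ["statistics", "city"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:mediaType": "text/csv",
        "dcat:downloadURL": "http://example.org/data/city-stats.csv",
    },
}

# Serialize the record; this is what a catalogue could publish.
print(json.dumps(dataset, indent=2))
```

Even this small record gives a harvester enough to answer cross-domain questions such as “which datasets are available as CSV?” without opening the data itself.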

The next step in data reuse is the ability to transform data to match the tools and frameworks where the data is to be used. Having data in an open format is good, but there often exist multiple potential open formats for the same dataset, and each context of use comes with a set of tools that may support only some of them. CSVs may need to be turned into KML, or XML into JSON. This is where on-the-fly data transformation tools such as The Data Tank come into play, easing data processing by removing format friction.
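As a minimal sketch of that kind of format conversion (the data values here are invented), turning a CSV snippet into JSON takes only a few lines of Python:

```python
import csv
import io
import json

# Hypothetical input: a small CSV dataset as it might be
# downloaded from an open data portal.
raw_csv = """city,year,population
Ghent,2014,251133
Namur,2014,110628
"""

# Parse the CSV into one dict per row, then re-serialize as JSON --
# the kind of on-the-fly conversion a tool like The Data Tank automates.
rows = list(csv.DictReader(io.StringIO(raw_csv)))
as_json = json.dumps(rows, indent=2)
print(as_json)
```

A real transformation service does the same thing behind an HTTP endpoint, so the consumer never has to care which format the publisher chose.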

Lastly, real added value can be created by going below the surface of the datasets, i.e. by no longer consuming datasets as unsplittable entities, but rather chunking them, taking the relevant parts for the subject at stake, and stitching the fragments into meaningful data mosaics. Some standards exist or are emerging to tackle that problem, like URI Fragments, Open Annotations, and the whole Linked Data toolbox, but a complete stack for the authoring and publication of such mosaics is still to be produced. Once achieved, such an environment would allow anyone to easily deconstruct datasets, build contextualized data mashups and exchange them as documents on their own, while relying directly on the original, remote data sources.
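To make the chunking idea concrete, here is a simplified sketch of RFC 7111-style `row=` fragment identifiers for CSV, which let a URI address a slice of a dataset rather than the whole file. Real fragments also support cell and column selectors, and the data below is invented:

```python
import csv
import io

def select_rows(csv_text, fragment):
    """Return the rows addressed by a CSV fragment such as 'row=2-3'.

    A simplified sketch of RFC 7111 fragment identifiers for text/csv:
    rows are counted from 1, the header line being row 1.
    """
    assert fragment.startswith("row=")
    start, _, end = fragment[len("row="):].partition("-")
    first, last = int(start), int(end or start)
    rows = list(csv.reader(io.StringIO(csv_text)))
    return rows[first - 1:last]

data = "city,population\nGhent,251133\nNamur,110628\nAntwerp,510610\n"
# Fetch only the fragment of interest, e.g. the second and third rows.
print(select_rows(data, "row=2-3"))
```

Stitching a data mosaic then amounts to collecting such fragment references from several sources and resolving them against the original, remote datasets.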

Curious to find out more? Come to the Open Data Tools and Standards session at 13.30 in the Auditorium Félicien Rops, where we will discuss this further.

Entrance bursary requests open for Open Belgium 15

January 22, 2015 in Events, Featured, Openbelgium15

Organising a community conference such as Open Belgium takes a lot of time, effort and money. To cover those costs we need to put a price on our tickets. But to ensure everyone has a chance to come to this event and discuss open knowledge in Belgium, we are happy to announce that we are providing 5 bursary tickets to students and people with no steady income. All you have to do to obtain one of these tickets is fill in the form below. After the deadline, we will assess who deserves these tickets and notify everyone who has applied.

Let’s meet at the SAI Data Summit Brussels

December 15, 2014 in Events, Featured

SAI, “Studiecentrum voor automatische informatieverwerking” (‘study centre for automatic information processing’), is organising the SAI Data Summit Brussels on the 5th of March 2015. This is an event where Open Data, Big Data, Smart Data and Linked Data tools are presented on the same day – because data is the new oil for our economies, and you’d better have the right toolkit at hand. During this event, SAI will present leading and cool tools to crawl, clean, convert, visualise and analyse data. And, importantly, the tools are affordable for everyone. So this event is for all people interested in data: data analysts, data scientists, data journalists, open data evangelists, innovators, hackers and more.

The master of ceremonies is Louis Dorard of ‘Bootstrapping Machine Learning’ fame. He will guide you through this event packed with interesting tools.

Location: Van der Valk Brussels Airport Hotel, Brussels
Tickets: Register here
Date: 05-03-2015 (13:15 – 17:45)
Language: English
125 EUR for SAI members.
175 EUR for non-members.

13.15 – 13.30 Registration

13.30 – 14.00 Tackling data: what can we do with data and which tools come in handy?
Speaker: Louis Dorard. He is the author of Bootstrapping Machine Learning and co-founder of the International Conference on Predictive APIs and Apps. He is a data consultant and partner at Codole. He studied machine learning at University College London.

14.00 – 14.30
Speaker: Alex Gimson, European Evangelist at a service that turns any website into a table of data or an API – web scraping on steroids.

14.30 – 14.50 OpenRefine
Speaker: Ruben Verborgh. OpenRefine is an open source tool for working with messy data: cleaning it and transforming it from one format to another. Ruben is a researcher in Semantic Hypermedia at the Multimedia Lab of iMinds, Ghent University. He is co-author of ‘Using OpenRefine’, published by Packt.

14.50 – 15.00 The DataTank
Speaker: Jan Vansteenlandt. The DataTank transforms any type of raw or binary data into machine- or human-readable (semantic) data and automatically provides a RESTful API on top of it. Jan is one of the co-creators and developers of the DataTank software.

15.00 – 15.30 Datawrapper
Speaker: Mirko Lorenz. Datawrapper is an open source tool to create simple, correct and embeddable charts in minutes – in use at, among others, De Standaard. Mirko is a journalist/information architect who conceived the idea for the project in 2011. He is co-author of the Data Journalism Handbook and a trainer in data-driven journalism.

15.30 – 15.50 Coffee Break

15.50 – 16.00 DaPaas
Speaker: Marin Dimitrov. DaPaas is a Data and Platform as a Service tool that aims to optimise and simplify both the publication and use of Open Data across different platforms. Marin is the CTO of Ontotext, a leading supplier in the semantic web space.

16.00 – 16.20 Tableau Public
Speaker: Bjorn Cornelis. Tableau Public is free software that allows anyone to connect to a spreadsheet or file and create interactive data visualizations for the web. Bjorn is a senior business intelligence consultant at Biztory.

16.20 – 16.40 Microsoft Power BI Tools
Speaker: Frederik Vandeputte. Power BI is Microsoft’s offering in the self-service BI-space. Frederik is a senior consultant and partner at Kohera and the president of the Belgian SQL Server User Group.

16.40 – 17.10 BigML
Speaker: David Gerster. BigML offers machine learning and predictive analytics as a cloud service. David is BigML’s vice-president of Data Science.

17.10 – 17.40 Dataiku Data Science Studio
Speaker: Kenji Lefèvre. Dataiku is an end-to-end solution to turn, step by step, raw data into a predictive API. Kenji is Dataiku’s head of product.

17.40 – 17.45 Conclusions by Louis Dorard

17.45 – Books giveaway … Drinks and networking

Additional benefits
Attendees will receive promo codes for the following books:

Need more information or have a question?
Jacques Vandenbulcke
Professor at the Faculty of Applied Economics of the KU Leuven

Join the OKFN Belgium mailing list