Triply has converted the famous Iris flower dataset to linked data! It is a multivariate dataset that quantifies the morphological variation of Iris flowers of three different species, measured on four different properties. In this data cube, each species of Iris occurs 50 times, and this linked data version uses the RDF Data Cube Vocabulary.
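
Since the dataset uses the RDF Data Cube Vocabulary, every measurement is published as a qb:Observation. Once a SPARQL service is running over the dataset (see the guide below), the observations can be inspected with a query along the following lines; this is a minimal sketch that makes no assumptions about the specific dimension and measure IRIs used in the cube.

  PREFIX qb: <http://purl.org/linked-data/cube#>

  # List a sample of observations from the Iris data cube together
  # with the properties (dimensions and measures) attached to them.
  SELECT ?observation ?property ?value
  WHERE {
    ?observation a qb:Observation ;
                 ?property ?value .
  }
  LIMIT 25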

How to start a SPARQL service

TriplyDB allows you to expose your dataset through SPARQL. Exposing your data via SPARQL gives you the opportunity to create SPARQL queries and data stories over your own dataset or over datasets from others. On TriplyDB you can already find several examples of SPARQL queries, but creating your own queries requires you to first start a SPARQL service over your dataset. The following step-by-step guide helps you start a SPARQL service.

  1. Go to the Services page and you'll see a form to create a SPARQL service.
  2. To create a SPARQL service, fill in a name for your service and select SPARQL from the three options.
  3. Press Create service to confirm your choices and a SPARQL service will be started.
  4. Wait until the status of the service has changed to running.
  5. A new option called SPARQL will appear in the sidebar. Clicking the button opens the SPARQL editor where you can write queries over your dataset.
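
As a quick check that the new service works, you can run a simple exploratory query in the editor. The sketch below makes no assumptions about the dataset's vocabulary; it simply counts how often each class is used.

  # Count how often each class occurs in the dataset.
  SELECT ?class (COUNT(?instance) AS ?count)
  WHERE {
    ?instance a ?class .
  }
  GROUP BY ?class
  ORDER BY DESC(?count)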

How to import DBpedia

The Iris dataset reuses classes, properties, and resources from DBpedia. This not only reduces the amount of maintenance, but also lets us make use of the links that DBpedia has already created. Before you can use these DBpedia resources, however, you first need to import DBpedia into the Iris dataset. The following step-by-step guide helps you do exactly that.

  1. Go to the Graphs page and click on Import a new graph.
  2. Click on Add data from an existing dataset.
  3. Type in DBpedia and select DBpedia-association / dbpedia from the dropdown menu.
  4. The page now changes to show one selected graph. This graph consists of 369,205,380 statements and is the full DBpedia dataset.
  5. To import this graph into your dataset, click Import 1 graphs. This will add the DBpedia graph to your dataset.
  6. You've now imported the DBpedia graph into your dataset. You can now use the browser to see more information about DBpedia resources.
  7. To remove the DBpedia graph from your dataset, go to the Graphs page and click the X behind the https://triplydb.com/wikimedia/dbpedia/graphs/default graph. This will remove your local connection to DBpedia.

PS: It is not allowed to start or sync a service while the DBpedia graph is imported. To start a service, you will first need to remove the DBpedia graph by following step 7.
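
To give an impression of what reusing DBpedia resources adds, the sketch below looks up the English label and abstract of one of the Iris species. It assumes that dbr:Iris_setosa is one of the DBpedia resources the Iris dataset reuses; and because a local service cannot run while the full DBpedia graph is imported (see the note above), a query like this could instead be run against DBpedia's public endpoint at https://dbpedia.org/sparql.

  PREFIX dbr:  <http://dbpedia.org/resource/>
  PREFIX dbo:  <http://dbpedia.org/ontology/>
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

  # Fetch the English label and abstract that DBpedia provides for
  # one of the Iris species (dbr:Iris_setosa is assumed to be one of
  # the resources the Iris dataset reuses).
  SELECT ?label ?abstract
  WHERE {
    dbr:Iris_setosa rdfs:label ?label ;
                    dbo:abstract ?abstract .
    FILTER (lang(?label) = "en" && lang(?abstract) = "en")
  }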

COVID-19 statistics for the Netherlands, as published by the RIVM and cleaned up by CoronaWatchNL.

An integrated ontology for the description of life-science and clinical investigations.

Version: 1.0.0

An information artifact is, loosely, a dependent continuant or its bearer that is created as the result of one or more intentional processes. Examples: UniProt, the English language, the contents of this document or a printout of it, the temperature measurements from a weather balloon. For more information, see the project home page.

Version: 1.0.0

The Gene Ontology resource provides a computational representation of our current scientific knowledge about the functions of genes (or, more properly, the protein and non-coding RNA molecules produced by genes) from many different organisms, from humans to bacteria. It is widely used to support scientific research, and has been cited in tens of thousands of publications.

Understanding gene function—how individual genes contribute to the biology of an organism at the molecular, cellular and organism levels—is one of the primary aims of biomedical research. Moreover, experimental knowledge obtained in one organism is often applicable to other organisms, particularly if the organisms share the relevant genes because they inherited them from their common ancestor. The Gene Ontology (GO), as a consortium, began in 1998 when researchers studying the genomes of three model organisms—Drosophila melanogaster (fruit fly), Mus musculus (mouse), and Saccharomyces cerevisiae (brewer’s or baker’s yeast)—agreed to work collaboratively on a common classification scheme for gene function, and today the number of different organisms represented in GO is in the thousands. GO makes it possible, in a flexible and dynamic way, to provide comparable descriptions of homologous gene and protein sequences across the phylogenetic spectrum.

GO is also at the hub of a major effort to represent the vast amount of biomedical knowledge in a computable form. GO is linked to many other biomedical ontologies, and is a foundation for research applying computer science in biology and medicine.

Version: 1.0.0

FALDO is the Feature Annotation Location Description Ontology. It is a simple ontology to describe sequence feature positions and regions as found in GFF3, DDBJ, EMBL, GenBank files, UniProt, and many other bioinformatics resources.

The aim of this ontology is to describe the position of a sequence region or a feature. It does not aim to describe the features or regions themselves, but instead depends on resources such as the Sequence Ontology or the UniProt core ontology.
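
As a rough sketch of the modelling pattern, assuming the commonly used FALDO terms faldo:location, faldo:begin, faldo:end and faldo:position, the start and end coordinates of annotated features could be retrieved like this:

  PREFIX faldo: <http://biohackathon.org/resource/faldo#>

  # A feature points to a region via faldo:location; the region's
  # begin and end positions carry the integer coordinate through
  # faldo:position.
  SELECT ?feature ?start ?end
  WHERE {
    ?feature faldo:location ?region .
    ?region  faldo:begin/faldo:position ?start ;
             faldo:end/faldo:position   ?end .
  }
  LIMIT 25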

You can read more about it in our paper describing FALDO.

The ontology can be browsed here: http://biohackathon.org/resource/faldo or at http://aber-owl.net/ontology/FALDO/#/.

Version: 1.0.0

The Evidence & Conclusion Ontology (ECO) describes types of scientific evidence within the realm of biological research that can arise from laboratory experiments, computational methods, manual literature curation, and other means. Researchers can use these types of evidence to support assertions about things (such as scientific conclusions, gene annotations, or other statements of fact) that result from scientific research.

ECO comprises two high-level classes, evidence and assertion method, where evidence is defined as “a type of information that is used to support an assertion,” and assertion method is defined as “a means by which a statement is made about an entity.” Together evidence and assertion method can be combined to describe both the support for an assertion and whether that assertion was made by a human being or a computer. However, ECO is not used to make the assertion itself; for that, one would use another ontology, free text description, or some other means.
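
For example, the evidence branch can be explored directly from the ontology. The sketch below assumes that obo:ECO_0000000 is the IRI of the root 'evidence' term (verify this against the loaded ontology) and lists its direct subclasses.

  PREFIX obo:  <http://purl.obolibrary.org/obo/>
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

  # List the direct subclasses of the top-level 'evidence' class.
  # ECO_0000000 is assumed to be the root 'evidence' term.
  SELECT ?evidenceType ?label
  WHERE {
    ?evidenceType rdfs:subClassOf obo:ECO_0000000 ;
                  rdfs:label ?label .
  }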

ECO was originally created around the year 2000 to support gene product annotation by the Gene Ontology, which now displays ECO in AmiGO 2. Today ECO is used by many groups concerned with evidence in scientific research.

Version: 1.0.0

An ontology for describing the classification of human diseases organized by etiology.

Version: 1.0.0

The Covid Statistics Profile data consists of the up-to-date Covid-19 Daily Statistics as well as the Profile of Covid-19 Daily Statistics for Ireland, as reported by the Health Protection Surveillance Centre. The Covid-19 Daily Statistics are updated on a daily basis, with the latest record reporting the counts recorded at 1pm the same day. The further breakdown of these counts (age, gender, transmission, etc.) is part of the Daily Statistics Profile of Covid-19, an analysis that utilises data dating back to 12am two days earlier to help identify patterns and trends.

Version: 1.0.0

The Covid County Statistics data consists of Covid-19 Daily Statistics for Ireland by county, as reported by the Health Protection Surveillance Centre. This data links into the OSi county geospatial Linked Data through the Unique Geospatial Identifier (UGI) for each county.
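
As a purely illustrative sketch of that link, the query below follows a county record to the OSi county resource it points at and fetches the county's label. The ex: property names are placeholders invented for this example, not terms documented for the dataset, and should be replaced with the properties actually used in the data.

  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  PREFIX ex:   <http://example.org/covid#>

  # Hypothetical sketch: join a daily county record to the OSi county
  # resource it links to, and fetch that county's label.
  SELECT ?record ?cases ?countyLabel
  WHERE {
    ?record ex:county ?osiCounty ;
            ex:confirmedCovidCases ?cases .
    ?osiCounty rdfs:label ?countyLabel .
  }
  LIMIT 25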

Version: 1.0.0