Latest Projects

Research project (§ 26 & § 27)
Duration: 2024-03-01 - 2028-05-01

In this project, novel deployment scenarios are tested and evaluated, and work is done on how to bring "common sense" into robots. The approach followed is "human-in-the-loop", combining the strengths of human experts with those of AI. Specifically, the project tests and evaluates deployment scenarios of robots in forestry technology and presents new ones. The novelty of this approach allows us to explore the basics in terms of requirements, challenges, and future possibilities of dealing with such systems, thus paving the way for more advanced basic-research projects or applications. The project is expected to result in a series of international publications and in an infrastructure for researching and testing the fundamentals of future AI technologies and for applying them in teaching. Finally, the emerging robot test park in Tulln, adjacent to the new House of Digitalization, may also generate broad interest in the topic.

The research methodology follows a 3G pioneer-research approach with agile human-centered design: generation 1, testing of existing technology; generation 2, adaptation of existing technology with low-cost means; generation 3, advanced adaptation that goes beyond the state of the art and is planned together with our partners in Canada and the UK, world-leading robotics institutes. The infrastructure funded by this proposal will serve existing projects and is intended to spur new, larger projects (e.g., EU projects). Added value is planned on three levels: 1) for the international AI research community through publications, 2) for the state of Lower Austria through later practical usage possibilities, and 3) as an important contribution to teaching, making AI education more attractive to young researchers and counteracting the labor shortage in AI.
Research project (§ 26 & § 27)
Duration: 2023-11-15 - 2026-01-14

In the last 20 years, numerous road maintenance devices have entered the market. These easily operable hydraulic attachments for tractors promise efficient and cost-effective maintenance of forest roads. Depending on the device, four to five maintenance passes are required between April and September to uphold the durability, driving comfort, and traffic safety of forest roads. Consistent use can significantly extend the interval between major repairs. There are compelling reasons to maintain forest roads with these road maintenance devices, but concerns arise regarding the technical functionality of forest roads. A lack of experience among equipment operators or inconsistent use can negatively impact the road surface; reasons for this include the entry of unwanted materials into the road structure or the tearing of the usually combined surface and support layers, leading to compaction problems. After the use of a road maintenance device, forest owners are often confronted with complaints from recreation seekers in the forest. Determining whether a forest road is suitable for such maintenance activities, and how a forest road should be restored so that road maintenance devices can be used on it, are the goals of this project.
Research project (§ 26 & § 27)
Duration: 2022-05-01 - 2024-11-03

The progress of statistical machine learning methods has made AI increasingly successful. Deep learning exceeds human performance even in the medical domain. However, the full potential of these methods is limited by their difficulty in generating underlying explanatory structures: they lack an explicit declarative knowledge representation. A motivation for this project is the rise of legal and privacy issues, which require machine decision processes to be understood and retraced. Transparent algorithms could appropriately enhance the trust of medical professionals, thereby raising the acceptance of AI solutions in general. This project will provide important contributions to the international research community in the following ways: 1) evidence on various methods of explainability, patterns of explainability, and explainability measurements; based on empirical studies ("How do humans explain?") we will develop a library of explanatory patterns and a novel grammar for how these can be combined, and we will define criteria and benchmarks for explainability and provide answers to the question "What is a good explanation?"; 2) principles for measuring the effectiveness of explainability, together with explainability guidelines; and 3) a mapping of human understanding to machine explanations, and an open explanatory framework deployed along with a set of benchmarks and open data to stimulate and inspire further research in the international AI/machine learning community. All outcomes of this project will be made openly available to the international research community.
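For readers unfamiliar with the field, the following minimal sketch illustrates one widely used post-hoc explainability technique, permutation feature importance in scikit-learn. It is purely illustrative and is not the project's own method; the dataset and model are arbitrary choices made only for the example.

```python
# Illustrative sketch only: a simple post-hoc "explanation" of a trained model
# via permutation feature importance. Dataset and model are arbitrary choices;
# this is not the project's methodology.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and train an off-the-shelf classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model relies on that feature, which yields a
# human-readable (if limited) account of the model's behaviour.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```

Such feature-ranking explanations are only one of the many explainability patterns the project text refers to; measuring how well they match human explanations is precisely the kind of open question the project addresses.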

Supervised Theses and Dissertations