About: Vision & Values

Our Vision
How we envision the future and our work
neway.ai believes that artificial intelligence can be used for good and can change many of today's circumstances for the better. We expect machines to free humans from repetitive, tedious or even risky processes, so that we can return to what makes life valuable and to what we are: self-determined. In our opinion, freeing these resources will ultimately lead us to do more creative and caring work, attending to what is really important: nature, each other and ourselves.
All of this serves us and our customers alike: neway.ai is motivated by meaningful work that is in line with its values, and our customers are and will always be on the safe side with models and applications that conform to ethical principles.
Our Values
A.I. for the good
Social and economic fairness
Resource efficiency
State-of-the-art research
Our Charter
Code of ethics
1. Social, cultural and environmental responsibility
- We affirm that we will use all our influence in the development and deployment of AI or AGI (artificial general intelligence) – if at all involved in the development of the latter – to ensure that this technology benefits us all, and to actively prevent the use of AI and AGI to harm humanity or nature or to unduly empower persons or institutions.
- On this account, we refuse to deploy our resources for the development of
- autonomous weaponry,
- human tracking and face- and voice-recognition software,
- pesticide production and application,
- industrial livestock farming,
- political propaganda and
- the deliberate dispersal of misinformation.
- Regarding environmental responsibility, we consider our indirect influence via a general technology for minimizing consumption to be more promising and more sensible in the long term than using AI directly in environmental projects. We base our decisions on this view.
- We follow our principle that the responsibility for any AI application lies not only with those who put it into operation, but especially with those who develop it.
2. Privacy principles and transparency
- We regard the personal freedom of each individual as his or her greatest asset. Therefore, we incorporate our privacy principles into the design and deployment of AI applications. These principles are based on the GDPR and the Basic Law of the Federal Republic of Germany.
- In human-machine interaction, we will always provide the opportunity for notice and consent, and we encourage all of our clients to do the same.
- We promote system architectures that allow users to access their own data and to deny communication with, or service from, an AI.
- We see it as an important task to promote transparency in AI and to design AI that provides constant feedback and explanation to its beneficiaries. To this end, any such AI application will always be subject to appropriate human direction and control.
3. Safety
- One fiduciary duty we see in our work is to ensure that the AI and AGI we develop and deploy are safe. In addition to German and European legal rules and political charters, we follow AI community guidelines laid out by leading researchers in the fields of artificial intelligence and machine learning. We also test our software in constrained environments and help to monitor it throughout its deployment cycle.
- Sources we include in our safety consideration:
- Any AI we develop or put into operation will either have a circuit breaker built in or we will clearly indicate such a necessity at delivery and inform clients about residual risks.
- We commit ourselves to publishing our own findings and bringing them back into the community.
- However, we reserve the right not to publish research that could have unforeseeable consequences or easily lead to misuse.
4. Fairness
- We are aware that bias can occur in AI and can seriously impact social and economic processes, endangering our principle of fairness. Therefore, it is our concern to maintain or improve equal rights and opportunities through our AI applications.
- We also recognize that fairness is sometimes subject to great cultural differences, and it will not always be easy for us to identify these from the beginning. However, we will always try to avoid unjust consequences for people, especially those related to religion, race, gender, sexual orientation, disability, social origin or political affiliation.
- In the spirit of equal opportunity, and in the light of economic leapfrogging through technology, we want to make AI accessible worldwide. For us, this means first and foremost close exchange with academic teaching and with other practitioners in the fields of computer science, robotics and artificial intelligence. In addition, we see it as our duty to provide our expertise especially in the education of small and medium-sized businesses.
5. Social mandate
- We commit ourselves to applying uniform and principled rules for the careful development and conscientious use of AI, and to promoting them at national and international level. For this reason, we support scientific and operational exchange to form a common and proven practice for the use of AI.
- We are open to inquiries from the legislature, educational institutions or aid organizations.
6. Strict scientific discipline
- Technological innovation requires principled science based on correctness and accuracy of content, transparency, intellectual integrity and adequate methods. We commit ourselves to uphold these high standards in our own research.
- We believe that a free exchange of ideas is one of the main drivers of rapid technological progress, and every person worldwide should benefit from it. For this reason, we plan to maintain our close contact with the broader scientific community, in academia and industry. Moreover, scientific publications have always been an integral part of our own research and development. This has led us to the decision to publish our own research results and, wherever possible, open-source code.
- We anticipate that in the future we may stop publishing some of our research for the benefit of security and the public good. In that case, we would instead contribute through research in the areas of security, policy and standards.