Minutes

DAY 1

Wednesday 22 June 2022

Safeguarding the Panhuman values of democracy, freedom, and privacy

The first artificial intelligence conference connecting the credo of technology with the credo of democracy has officially started.

18:00 Professor Periklis Papadopoulos, Aeronautics and Astronautics, SJSU

Professor Periklis Papadopoulos connected from San Jose, California to introduce us to AIIA’s work and mission.

18:00 Despina Travlou, Managing Director, Slide2Open Communications

18:00 Angelos Roupas, Founder, Second Wind & Partners

The Forum's moderator, Mr Angelos Roupas, began his speech by highlighting the importance of the cautious use of artificial intelligence.

18:00 Marcus Murbach, Principal Investigator at NASA Ames Research Center

18:10 Leonidas Christopoulos, Secretary General of Digital Governance and Simplification Procedures at the Greek Ministry of Digital Governance

Leonidas Christopoulos, the Secretary General of Digital Governance and Simplification Procedures, opened the forum by talking about AI and public administration, and the transformation of the sector since AI's introduction. He stressed the importance of strengthening public debate on AI; to this end, strong regulatory instruments, policies that respect democracy, and the enhancement of education and society are essential.

18:30 Dr Deniz Yuksel Beten, Senior SPS in NATO

Dr Deniz Beten presented NATO’s SPS programme in detail, informing us about the projects, training and workshops they provide for those interested in AI, cyber-defence and counterterrorism.

18:50 Michael J. Penders, Cybersecurity Program Manager at TUV SUD, former prosecutor, USA

Mr Penders's speech focused on the right to information in a post-9/11 world. He explained how technology now develops so fast that institutions can't keep up, hence the need for policy making, standards and strategy in the AI world. He also stressed the need for accountability.

Democratizing the power of AI

All four speakers shared their opinions on how they see the democratization of AI taking place.

19:10 Dr Nikos Bogonikolos, President of Atalos Group

Dr Bogonikolos said that democratizing AI is easier than we think. Fears that the public might have, such as the replacement of humans by robots, could be overcome if we expand our knowledge and understanding of AI. People think that AI threatens them in the workplace, which, according to him, is not true, because in a democratic environment everyone has rights regardless of their capabilities. Once again, the importance of policy making was discussed: if there is a set policy for AI, then everyone can operate in peace.

19:10 Aris Dimitriadis, Executive Director Compliance and Risk, OTE

Mr Dimitriadis agreed with Dr Bogonikolos and Dr Karachalios that policy making is crucial, adding that there is no reason to be afraid of new technologies: upcoming AI regulation shouldn't be seen as an obstacle, but as a means of easier communication within society.

19:10 Dr Konstantinos Karachalios, Managing Director of IEEE

Dr Konstantinos Karachalios mentioned that there are several aspects to it, placing importance on creating a trustworthy system based on criteria such as a chain of accountability, transparency, the question of discrimination and bias and, finally, respect. Additionally, he stressed the need for policy making for platforms that use algorithmic systems to undermine democracy. He closed his speech by mentioning once again that technoscientific communities need to assume responsibility for their actions, reflecting on past mistakes to do better in the future.

19:10 Dr Vangelis Karkaletsis, Director of IIT

Dr Karkaletsis agreed with Dr Karachalios. He mentioned that putting AI in the service of society and infusing it with democratic values will lead to its democratization. Transparency, diversity, fairness, non-discrimination, and societal and environmental accountability are additional criteria that will lead toward the democratization of AI.

20:00 Professor Barry O'Sullivan, University College Cork

Professor O'Sullivan began his speech by reminding us that AI has been a part of our lives for 70 years now. We use AI all the time; it is connected to more sectors of life than we can even imagine. Of course, that brings dangers. He raised the question of whether AI should have rights, adding that we need to be conscious of what we can achieve by using AI and what we cannot. He finished his speech by adding that there is an immediate need to produce ethical guidelines for AI, as well as methods to assess whether they work.

All four speakers were asked the following questions: 'How do we put the democratization of AI into practice?', 'How does upskilling contribute to AI democratization?', 'Should we democratize AI? If yes, what should we democratize and for whom?', 'Are there challenges in the way?', 'Where do you see AI in the next 5 years?'

20:20 Dr George Giannakopoulos, Researcher, NCSR Demokritos, CEO at SciFY PNPC

Dr Giannakopoulos replied that putting democratization of AI in practice is a very challenging issue that cannot be resolved without strategic planning.

Can societies strive for a better AI? Dr Giannakopoulos replied that we can, and we should, since a trustworthy AI is the ultimate goal. Policy makers need to contribute and create policies based on everyone’s needs.

As his closing statement, Dr Giannakopoulos agreed with Dr Nikolopoulos regarding the last question: the machine will assist the human in decision making, and that is the optimal solution we will see within the next 5 years.

20:20 Dr Christos Gizelis, Principal Innovation Analyst, OTE Group

Dr Gizelis replied to the second question, explaining that a lack of knowledge in a society can hold it back from reaching an acceptable level of innovation. We are at a point where we have an innovation culture as part of our DNA; unfortunately, innovation alone isn't enough. We need to raise awareness and get educated on innovation systems, AI included. If we don't share its benefits, we will never move forward. He also mentioned that upskilling allows us to align common goals and express our needs more clearly in an ever-changing society.

Are there challenges in the way? The Principal Innovation Analyst mentioned that AI adoption by organizations is in its infancy, so there are many challenges ahead. The technology needs to mature, and staff who lack experience need to be trained. The transition from concept to deployment is harder than one can imagine.

20:20 Dr Vassilis Nikolopoulos, Head of Applied Research and Development, Innovation Energy and NG BU, MYTILINEOS SA

Dr Nikolopoulos replied to the first question as well, mentioning that AI isn't hard; businesses just need upskilling to advance their skills and perform better.

Where do you see AI in the next 5 years? In many conferences you might have heard the saying 'when you put a human against the machine, the machine wins, but when you put a machine against a machine and a human, the duo always wins'. Dr Nikolopoulos sees a hybrid future for AI. The development and use of weak AI applications within the next 3 years is going to be crucial. He sees a hybrid model: the machine is there to do the complex mathematics, but in the end it is the human who will make the final decision.

20:20 Dr Xenia Ziouvelou, co-chair of AI at the European Digital SME Alliance AI Working Group (FGAI), Researcher, NCSR Demokritos (GR) & University of Southampton (UK)

Should we democratize AI? If yes, what should we democratize and for whom? Dr Xenia Ziouvelou replied to the last question by explaining that we first need to ask what the 'democratization of AI' means before we begin answering. With more people having access to AI, we open the way for more innovation; but that is not all we mean. A more holistic point of view is possible: infusing AI with all the moral ideals and rights of democracy, democracy as a moral idea, not a political one. She adamantly believes that we should democratize AI, because that way society can derive a number of significant benefits from it. Democratizing AI will reduce entry barriers for individuals and nations and increase innovation levels, since all the tools needed to create are already available: a lower cost of building an AI solution, the development of talent, and a higher speed of AI adoption in the academic and business worlds. We need to democratize for everyone: the public sector, academia, and businesses. As a closing statement she reminded us that balancing obligations between smaller and bigger AI providers is necessary for democratizing AI.

Protecting data and privacy in a world of exploding data

21:20 Robert Sobon, Senior Legal Analyst at Raytheon Technologies

21:20 Professor David Opderbeck, Professor of Law, Seton Hall

Professor Opderbeck raised a very important issue about AI ethics and their use. We should limit the amount of information we collect and even anonymize it, so that we don't hold anyone's personal information. Is that even possible, though, when it comes to large data sets? Can they be anonymized? The professor stressed the need to apply GDPR globally, as right now it is in place only in Europe. According to him, people need to be asked for explicit consent, since it is more precise and offers the ability to opt out, and is therefore safer for the individual. In closing, he agreed with many of the speakers before him that there should be some human check on how AI works, as well as policies that notify individuals when their data is being manipulated.

21:50 Dr Konstantinos Karachalios, Managing Director IEEE

Dr Karachalios was asked to reply to the question 'What does advancing technology for the benefit of humanity mean?' In his response he mentioned a few issues that need to be addressed, such as the quality assessment of AI, the possibility of AI systems undermining the foundations of democracy as well as civil rights and, finally, children's rights. He insisted on the lack of children's rights online and on the need to implement an age-appropriate design code. He also added that taking the lead and assuming responsibility are two pressing issues when dealing with AI. Dr Karachalios presented an action plan that IEEE has implemented to regulate AI and assume accountability, called 'IEEE CertifAIEd'.

22:10 Blaise Aguera y Arcas, Vice President and Fellow at Google Research

Foundation models, and their ethics, are the future of AI. Most of their training is unsupervised; it is simply based on reading the web, which is what makes them multi-modal. AI foundation models have achieved an unprecedented level of reading the web, being able to find hidden words, perform numerical reasoning and fill in random blanks. The most recent model is PaLM, composed of 540 billion parameters, which is able to understand and explain jokes. Finally, Mr Aguera y Arcas predicted that AI will remain a topic of disagreement for the foreseeable future. A discussion with Dr Konstantinos Karachalios, Managing Director of IEEE, followed the end of his presentation: two professionals in the same session, holding different opinions on AI, discussed 'what it means to be human', 'how we can use our influence toward something collectively beneficial' and 'how to create policies so that AI companies take responsibility'.
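To make the "fill in random blanks" capability concrete, here is a minimal, hedged sketch using the open-source Hugging Face transformers library and the public bert-base-uncased masked-language model; neither was mentioned in the talk, and PaLM itself is not runnable this way.

```python
# Illustrative sketch only: a small open masked-language model filling in a hidden word.
# Assumes the `transformers` library and the public `bert-base-uncased` checkpoint;
# this is not PaLM and not code from the talk.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The model proposes the most likely tokens for the [MASK] position.
for candidate in fill("Artificial intelligence will [MASK] the way we govern technology.")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```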

22:55 Vasilis Papakonstantinou, Partner at Blue Dome Capital & Vice-Chairman of MIT Enterprise Forum Greece

In his live speech Mr Papakonstantinou talked about our mobile society, comparing it to the Ali Baba story and the cave that would only open with the phrase 'open sesame'. As a society we are relying on a password, the magic word that is going to reveal our digital treasures. Unfortunately, our passwords aren't always able to protect our digital assets, since hacking has reached new levels in recent years. There are ways to avoid being hacked, specifically by using complex passwords and updating them every month; multi-factor authentication is also recommended.
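As a small illustration of the 'open sesame' point (not taken from the speech), the sketch below shows how a service can avoid storing the magic word itself, keeping only a salted hash derived with the Python standard library.

```python
# Illustrative sketch (not from the speech): store a salted hash of the password,
# never the password itself. Uses only the Python standard library.
import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None):
    """Return (salt, key) derived with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Compare the stored key with a freshly derived one in constant time."""
    return hmac.compare_digest(stored_key, hash_password(password, salt)[1])

salt, key = hash_password("open sesame")          # what the service stores
print(verify_password("open sesame", salt, key))  # True
print(verify_password("open salami", salt, key))  # False
```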

AI/ML in space

23:30 Kelli Kedis Ogborn, Vice President of Space Commerce and Entrepreneurship, Space Foundation

The Moon and Mars are no longer a wish; they are a fact. Interplanetary exploration and the space economy are two plans already set in motion, which can potentially scare people who aren't in the industry, but that shouldn't be the case. It is estimated that by 2040 the global space market will be worth 1 trillion dollars. This estimate alone attracts people who understand that space has a lot to offer. Space is the future, since it has no shortage of strategic creative innovations. AI and ML go hand in hand when it comes to space. Medical assistance in space is heavily based on ML advancements, since we have reached a point where we can predict our future medical needs. The same applies to astronaut assistance and space debris: predictions of future needs and actions.

23:50 Dr Tarek Taha, Senior Researcher, Brisk Computing

Dr Taha presented trends in the AI/ML sector through a series of requirements in space, such as energy efficiency, radiation tolerance, in-situ training and, finally, autonomy.

00:10 Jean Muylaert, Academician SB-RAS and IAF

Dr Muylaert stressed the importance of cybersecurity in our times, as well as its further development. The digital revolution we are going through has made big data collectors vulnerable to cyber-attacks. Cyber range tests are put in place to assess the level of vulnerability.

00:35 Dr Janette Briones, Senior Researcher, NASA Glenn Research Centre

00:35 Dr Rachel Dudukovich, Senior Researcher, NASA Glenn Research Centre

In their joint presentation, the two researchers discussed why NASA needs its own autonomous communication system with internal decision making in order to successfully meet the challenges presented by commercialization. The space communication environment is more challenging than one can imagine, so additional communication support may be needed to cope with difficult conditions and maintain performance. Glenn is creating a cognitive communication system prototype (called CE-1) that will provide enhanced interoperability and optimise performance for future NASA missions. AI and ML are going to help construct a tolerant network, able to accommodate multi-hop routing.
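The following sketch is purely illustrative of the multi-hop routing idea; it is not CE-1 code, and the nodes and link costs are invented. It picks a route through relay nodes by shortest path, the kind of decision a cognitive network could automate when no direct link is available.

```python
# Illustrative sketch only (not NASA's CE-1): choose a multi-hop route through a
# relay network by shortest path, using Dijkstra's algorithm.
import heapq

def shortest_route(links, start, goal):
    """Dijkstra over a dict of {node: {neighbor: link_cost}}; returns (cost, path)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in links.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical link costs between a rover, two relay satellites, and a ground station.
links = {
    "rover": {"relay-1": 4, "relay-2": 7},
    "relay-1": {"relay-2": 1, "ground": 9},
    "relay-2": {"ground": 3},
}
print(shortest_route(links, "rover", "ground"))  # (8, ['rover', 'relay-1', 'relay-2', 'ground'])
```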

00:55 Professor Hua Harry Li, Computer Engineering, San Jose State University

Introduction to reinforcement learning method.
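To give a flavour of what such an introduction typically covers, here is a minimal, self-contained tabular Q-learning sketch on an invented 5-state corridor; it is illustrative only and not taken from the presentation.

```python
# Minimal Q-learning sketch (hypothetical toy environment, not from the talk):
# an agent in a 5-state corridor learns to move right toward the rewarded end state.
import random

n_states, actions = 5, [-1, +1]        # move left / move right; reward at the right end
alpha, gamma, epsilon, episodes = 0.5, 0.9, 0.1, 500
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(episodes):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy action selection
        action = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# Learned greedy policy: should be +1 (move right) in every non-terminal state.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
```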

01:15 Dr Rogan Shimmin, Senior Engineer & Technical Program Manager, US Department of Defense

Presentation of the Defence Innovation Unit. Its mission is to solve national security problems, both on Earth and in space, by working with DoD and commercial partners.

DAY 2

AI and the Environment

Mr Roupas Pantaleon introduced and welcomed Prof Synolakis, who discussed the protection of the environment using AI. Prof Synolakis explained that AI can give better weather predictions, as it can reduce uncertainty. As for AI and net zero, Prof Synolakis explained that we first must understand what net zero means and make better estimates of carbon footprints. Buildings are one example: we must be able to measure, monitor, estimate and then reduce carbon emissions.

AI and its impact on human capital management

Ms Mamalaki explained that 1 in 4 corporations use automated or AI technologies for their human resource activities, including recruitment. AI can find cases of employee dissatisfaction. Strengths and weaknesses are identified, but there are also constraints and challenges, the ethical use of huge amounts of data being one of them. Integrating AI with the human element is the way ahead.

Mr Diorinos presented Bryq, a talent intelligence platform that helps HR managers and CEOs recruit, hire, and keep talent. If we have a process whose workings we understand, and we know and combine the data, then we can teach it to the machine; AI is only as good as the processes we teach it. Culture is important: it can be codified and applied to internal and external candidates.

To Anna's question of whether AI can be ethical, Markellos said that the definition of 'ethical' is a moving bar, which is probably a sign of progress but also a challenge. Anna believes that AI is a tool, not the solution to everything.

AI and Metaverse

The next session, chaired and moderated by Ms Liadis, discussed AI and the Metaverse.

After introducing the panelists, Ms Liadis gave the floor to Dr Bogonikolos, who explained what the Metaverse is and noted that AI will populate it. Blockchain technology is the generator of the Metaverse. Synergies between AI and the Metaverse exist, e.g. in retail, healthcare, or smart cities. The Metaverse can create a new society.

Dr Sotiriou explained that the Metaverse helps bridge the gap between formal and informal learning. The interactive exhibition of Myrtis in the Metaverse was launched in April 2021 and has since been presented in different locations. Content can be modified according to the profile of the user (more on www.myrtis.gr).

Dr Patseas discussed how a modern organization can use innovative software in ways that were not possible a few years back, e.g. using algorithms to extract information and understand intentions. They use virtual reality to train people for hazardous situations. He urged people to discover the technology that is already here and can produce great applications for almost all industries.

AI in Supply Chain and Logistics, Ports and Shipping was the title of the next session, chaired and moderated by Mr Mavridis.

Mr Mavridis introduced the theme of the panel, explaining that shipping in ancient Greece extended all over the Mediterranean. Shipping was and is the primary way of freight transport – road and rail come far behind. Many stakeholders relate to shipping. Is AI a threat or an opportunity to shipping?

Mr Garcia spoke about the five clusters of Abu Dhabi Ports, one of which is the digital cluster. Automation is not just about saving man-hours. During COVID they increased efficiency and capacity in Khalifa Port even as people were working from home; wise investments in technology made that possible. A port is like a closed area: the more one can automate, the better. Technology must be used wisely.

Mr Pyrgiotis introduced their work, the commercial management of some 40 vessels. Their initial vision was to use data and analytics to outperform the competition. One of the AI solutions they have developed 'reads' emails and classifies them according to content. Vessel tracking and the monitoring of cargo demand help in making better fleet deployment decisions. Digital twins of the entire value chain can be implemented (this has already been done in the air traffic industry). There is a lot of value to be gained from AI in shipping, but we are not there yet.

Mr Georgopoulos discussed how AI can assist in operations. The idea is to create AI systems that assess in real time the vessel’s environment, taking into consideration the management’s choices: a system that takes human inputs, combines requirements, and guides the crew. We are at the beginning of AI in shipping and AI should be utilized to solve specific problems of the shipping industry.

Mr Teriakidis discussed the work of a classification society, an important part of which is to check whether a vessel meets global standards and complies with international regulations. As the years go by, the 'low technology' sector of shipping is changing: shipping companies, classification societies, brokers, charterers etc. are using more advanced technology, including AI, which goes through large sets of data efficiently and to the point. Slowly this is also reaching the vessels themselves; for instance, we now have the case of an autonomous electric ship. Only electric vessels can currently be autonomous; they are rather small and travel short distances. Collaboration is the key word for bringing AI into shipping.

In the final session of the day, chaired and moderated by Mr Scaramella, Ms Armellin, Mr Ciavoli Cortelli and Ms Agliulo from Capri Campus presented their work, while Prof. Trenta and Mr Di Maio shared some thoughts on security and intelligence.

Prof Trenta discussed the importance of digital transformation and noted that the military component used to be the driver of technological advances, but not anymore. She questioned whether it is ethical to use AI in the military field and underlined the fact that autonomous weapons have improved significantly.

Mr Di Maio noted that AI can be used for defensive and offensive reasons. We must reconsider our defense capabilities. We must collaborate, set up new models for our protection, protect classified information, analyse and explore the potential use of AI.

Mr Scaramella called on his colleague Ms Armellin to present Capri Campus. They expressed their concerns about AI being a security threat. Intelligence can also be used for terrorism, propaganda and parliamentary control; in the hands of hostile countries, AI can be a very powerful weapon.

Mr Ciavoli Cortelli started his presentation by reminding us of the ‘70s and the use of satellites for non-military purposes. AI will bring about important changes, but there is the risk of using AI for non-noble purposes. Therefore, transparency is essential.

The rule of law was the subject of Ms Agliulo, who posed an intriguing question: what happens if a machine commits a crime? Unlike a gun, an intelligent machine is not just a tool. Can a programmer control the machine? What if the machine is unpredictable? There is a need for a controller who is responsible for the actions of the machine.

At the core of the Capri Campus discussion lies the need to set limits to intelligence and AI, to build a perimeter set by rules, the rule of law, and ethics. Enemies do not build on transparency and ethics; we need to do so, otherwise a new Middle Age is near.

In their concluding remarks for the second day, Ms Travlou and Mr Roupas Pantaleon commented that AI should be about bettering our lives and bettering society, and that means going from the theoretical approach to a more practical one, always with a human face. Until tomorrow!

DAY 3

Ms Travlou welcomed participants and viewers from all over the world on the third day of the Forum.

The first session of the day focused on AI and Aerospace – The Coming Years. Prof. Papadopoulos first gave a short background on Silicon Valley, home of a government-education-defense industry triangle and the backbone of technology at a global level. An interdisciplinary, intergenerational session was about to start. Space systems include multiple subsystems, involving multiple disciplines.

Mr Murbach, a hands-on DBF (design, build and fly) person, presented nano-satellites and explained their relevance. In terms of technology they are considered disruptive, in the sense that they disrupt the status quo. The confluence of 'disruptions' and collaboration among different areas made them possible, and they are rapidly advancing space AI/ML applications.

Mr Murbach introduced Dr Frank.

Dr Frank started by explaining the project of human return to the Moon. He discussed human spaceflight mission exploration in terms of mission operations functions, destinations and motivations for AI, and then explained the use of AI technology. He concluded by giving information on autonomous mission operation projects, where the crew in the spaceship can perform tasks that used to be performed from the ground.

Mr Gaskey introduced Dr Orchard.

Dr Orchard discussed AI in private industry. Their aim is to integrate neuromorphic intelligence into computing products at all scales. There is a lot to learn from biology. These considerations led them to a new class of computer architecture, and they developed Loihi, which has proved successful in terms of latency and energy efficiency, with significant gains in recurrent networks. Neuromorphic computing still has challenges to face, e.g. high cost, algorithm and programming models, and software convergence, but it offers great gains, great potential and excitement.

Do we trust autonomous systems?

Prof. Papadopoulos introduced Mr Woudenberg.

Mr Woudenberg presented project FIDES (Frameworks for Integrated Design of Entrusted Systems). Exploring the notion of trust, he noted the difference between trusting people and trusting systems (systems we do not forgive!). Autonomy is an enabler with three dimensions: intelligence, independence, and collaboration. Trust is difficult to pin down; one can entrust a system to do something. Ethics is messy too! Verification and validation are important, and so is the environment: trust and environment must be reconciled.

Q: How much of this technology is available for commercial use?

A: The framework is open. We have a lot of technology, but how are we going to trust and use it?

Q: To what extent is knowledge derived from flight integrated?

A: It depends on verification and validation. The flight data goes into the environmental considerations. A lot of data is required to try and work in an entrusted space.

Mr Edmond introduced Prof Hacker.

Prof Hacker discussed intelligent agent AI in space applications. The challenges include data volume, satellite power, communication bandwidth, cybersecurity, adaptability, and round-trip time. But what is an intelligent agent? It is a self-contained unit that can look at its environment and take actions based on what it sees there. So, can all information become smart? Optimisation, transferability, adaptability and responsiveness, and monetization of actions are among the benefits of intelligent agents.
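As a minimal illustration of the sense-decide-act loop just described, here is a hypothetical thermostat-style agent; the environment and thresholds are invented and not from the talk.

```python
# Minimal sketch of an intelligent agent: observe the environment, decide, act.
# The thermostat-style environment here is purely hypothetical.
class Agent:
    def __init__(self, target_temp=21.0):
        self.target = target_temp

    def decide(self, observation: float) -> str:
        """Look at the environment (a temperature reading) and pick an action."""
        if observation < self.target - 0.5:
            return "heat"
        if observation > self.target + 0.5:
            return "cool"
        return "idle"

temperature = 18.0
agent = Agent()
for step in range(6):
    action = agent.decide(temperature)                                # sense -> decide
    temperature += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]   # act changes the environment
    print(f"step {step}: temp={temperature:.1f}, action={action}")
```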

Mr Krzesniak introduced Dr Lowry.

Dr Lowry talked about a project called Neuromorphic for Space Autonomy, on which many people at NASA work. Neuromorphic computing is mapping neuroscience to silicon, where neuromorphic processors have outstanding power efficiency for AI and ML applications. Loihi went into orbit about 6 months ago, and 16 experiments have been carried out so far. Benefits include power efficiency, autonomy, and low-cost exploration of the solar system.
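For readers unfamiliar with the spiking-neuron model that such processors implement in silicon, here is an illustrative leaky integrate-and-fire simulation in plain Python/NumPy; it is not Loihi code and the parameters are invented.

```python
# Illustrative sketch only (not Loihi code): a leaky integrate-and-fire neuron,
# the basic spiking unit behind neuromorphic processors.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the membrane potential trace and spike times."""
    v, trace, spikes = v_reset, [], []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)          # leaky integration of the input current
        if v >= v_threshold:                 # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.concatenate([np.zeros(20), 1.5 * np.ones(80)])  # step input at t = 20
_, spike_times = lif_neuron(current)
print("spike times:", spike_times)
```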

Ms Catalogna introduced Mr Fox, but he was unavailable, so his colleague Mr Fearn made the presentation.

Mr Fearn discussed geo-intelligence. Their firm, Geospatial Insight, founded in 2012, uses machine learning models. Catastrophe ('cat') response is a core service for them. Satellite imagery and AI lead to a better understanding of risk for their clients. Climate change, especially with hurricanes, has changed the scale of risks, so automated AI analysis of imagery (prior to and after a disaster) offers a solution. Manual damage annotation is more expensive, time consuming, may present inconsistencies between analysts and, finally, is unscalable. With automated damage assessment, on the other hand, they have classified damage into categories and defined damage classes. They annotated images and have prepared a fully operational and accurate AI-based damage assessment model. In conclusion, access to imagery combined with AI can change the insurance sector.
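To illustrate the automated damage-assessment idea in miniature, the sketch below trains a classifier on invented per-building features and damage classes; it is a hypothetical example, not Geospatial Insight's model.

```python
# Hypothetical sketch (not Geospatial Insight's model): classify per-building damage
# levels from pre-computed image features. Assumes scikit-learn; feature names and
# labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each row: [roof_change_score, debris_fraction, flooding_index] for one building.
X = rng.random((200, 3))
# Invented labels: 0 = none, 1 = minor, 2 = major, 3 = destroyed.
y = np.minimum(3, (X.sum(axis=1) * 1.5).astype(int))

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))
```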

Prof. Papadopoulos introduced Mr Gibson, to give his entrepreneurial view on commercial space and space financing.

Q by Prof Papadopoulos: What are his perspectives and thoughts?

A by Mr Gibson: Developments are super-fast. Picking the right people and the right technologies for the right applications is essential. He got the space bug long ago and followed it intensely. Satellites are now made very smart. It is encouraging that technology evolves, because they want to blanket their platforms with AI. Even the cost of launching a satellite has gone down.

Q by Mr Murbach: What are the technologies that we will see in 5-10 years?

A by Mr Gibson: Launching and ridesharing make getting into space easier. Using less power, going to deep space, processing with agents and AI, machines that can rebuild themselves: there lies the future. Earth is a spaceship, and we must take care of it.

Q by Mr Gibson: He asks Mr Murbach when his technology will be available, since he wants to buy it!

Mr Murbach (laughter) says that the whole idea is that neuromorphic chips are rapidly evolving.

Q by Mr Murbach: Where are we going to be in 15 years?

A by Jean Muylaert: First, collaboration is required, advancing the transition from data to information. But we need to be resilient against cyber-attacks. His concern is how we can ensure a safer world: shall we be able to cope with this enemy? We must work today and realise that the quality of a pixel is very important!

Mr Gibson says that satellites can collect any data and images in the world. With AI, the 100% that is collected may come down to 2% that is useful for the customers. Big satellites can do the work of 150 small ones.

Q by Prof Papadopoulos to Mr Gibson: how do you see the implementation of these financial instruments being transferred?

A: They have spent time educating their investors. They must make an investment commercially viable. Good technologies exist that are not necessarily commercially viable. A platform can be a better solution, and space is open for business.

Q by Jean Muylaert: His concern is that looking only at ROI is a short-term vision, which may unbalance the scientific development side. How can the balance between short-term commercial ROI and long-term science, research and education be ensured?

A by Mr Murbach: We have a space industrial policy, investing billions, so there is this as a basis. So there is a balance, though we cannot say if it is the right one.

Q by Prof Papadopoulos: NASA has received funding from the government and private industry. Now funding comes more from the private sector. Is SpaceX going to be the first to colonize Mars?

A by Mr Murbach: Humans on Mars has been a goal for decades. If Apollo had continued, we would have gone to Mars in 1982, but we didn't. Going to Mars will be more governmental than private. With the Apollo program, the journey was more exciting than the destination. Now we can harness a lot of these new technologies to bring the cost down and do it.

Mr Gibson gave the example of a PhD student who was developing a project to control power flow on the grid; this turned into a billion-dollar firm. There are a lot of ideas out there. They are not doing R&D; they are going to buy great ideas.

Jean Muylaert's final remark: space is opening a new era for the future.

Mr Murbach: enthusiastic about the future, he wished he were 40 years old again. Technology is a double-edged sword. We must be careful not to lose the standards of living we have achieved, and use AI in an intelligent manner.

Mr Gibson: impact investing is important. Growing bananas is a big issue, as is coffee. We are in the business of providing information, and this can help mankind and countries.

Prof Papadopoulos: he acknowledges the role of the agency (NASA). We are in an era of exponential growth of space, which is the ultimate frontier.

The final session of the day was on AI and Decentralised Finance.

Mr Roupas Pantaleon introduced Mr Patel.

Mr Patel introduced us to his vision of disrupting the existing business model, in which people from a poor country do not enjoy the benefits of advertising and the like. Even in ESG they use AI, from planting to collecting data; the credits from that are converted into tokens for everyone. As regards banking and wealth creation, people have lost loyalty to banks; there is very little engagement between customers and banks. They have a neobank, integrated into an exchange, where everyone can be their own wealth manager. Everyone has a value. They are forming a platform that promotes sharing the advertising revenue with the community. AI/ML is used to analyse all those huge amounts of data, which no person would be able to do.

Mr Roupas Pantaleon introduced Dr Mesquita.

Dr Mesquita noted that academia and the real world should work together. It seems the time has come for AI, and we are facing a small revolution that involves democratizing the power of AI. The heart of the decentralization process is decentralized finance. LOTUS INVESTMENTS: maximize value… All of that is powered by AI, which has a pivotal role in driving developments. AI means learn, predict and act, and this is the real revolution ahead. With that, the whole financial system can be disrupted. Regulation is there and must evolve. NFTs (non-fungible tokens), DeFi, blockchain and AI are what we are going to see in the next years.

Mr Roupas Pantaleon said that, in his view, banking regulation has gone too far and that he believes in decentralization. But how can we trust the good side of people? How can we guarantee that fraud does not come in?

Mr Patel: we look at camels, not unicorns. The young generation has no clue about money. Centrally regulated, not controlled. There is no regulation in decentralized exchanges.

Mr Mesquita: a good number of regulators came from the banking industry. The evolution will be so fast that we will not be able to write fine print about everything. He likes the intent of Europe…

Mr Roupas Pantaleon asked how to keep the super-rich from benefiting from their solutions.

Mr Patel: their offering spans social impact, ESG, neobanking, and decentralized and centralized exchanges. He limits how much the rich guys can come in, so everyone benefits.

Mr Roupas Pantaleon: But how can the rich be left out? What if someone comes in under 5 different capacities and takes small shares in each?

Mr Patel: AI/ML comes in and gives the solution.

Mr Mesquita: nobody is naïve enough to believe that banks or the super-rich will not make more money from DeFi and blockchain. He believes that the old system will not be killed. Once you can give everyone the same opportunity, you are already closing the gap. With AI, everyone can have the same intelligent knowledge and everyone can be on the same playing field. We will not reach Elon Musk, but the gap will be smaller.

Mr Roupas Pantaleon thanked the discussants.

In concluding, Ms Travlou commented on the vivid debate and thanked everyone for their participation. Attendance was very high, and all information will be uploaded to the site. Mr Papadopoulos sent his thanks from Silicon Valley and gave the floor to Mr Gibson, Ms Catalogna and Mr Murbach for the final greeting.

 
