Prompt Engineering – Unlock the Power of Generative AI


In the last couple of years, artificial intelligence (AI) has reached new heights with the introduction of generative AI and large language models (LLMs), which touch almost every aspect of our lives. As generative AI models such as GPT-3 continue to expand and evolve, one critical area that has gained increasing attention is prompt engineering.
By harnessing the power of language models, prompt engineering enables us to leverage AI systems more effectively. Prompt engineering can dramatically improve the outputs of AI models and can facilitate more meaningful human-AI collaboration.
We will cover the fundamentals of Prompt engineering, including real-world applications and its significance in today’s rapidly advancing AI landscape.

What is Prompt Engineering?

Prompt engineering is intended to enhance the performance of AI systems, specifically large language models (LLMs), by composing effective and targeted input prompts. Language models like GPT-3 (OpenAI), BERT (Google), and RoBERTa (Facebook AI) are designed to generate human-like responses based on the input they receive.
However, if you have played around with OpenAI's ChatGPT (which uses GPT behind the scenes), you may have noticed that it does not always produce the desired output or exhibit a deep understanding of the context.
Here, prompt engineering plays a vital role, focusing on fine-tuning the input prompts given to an LLM to achieve more accurate, relevant, and meaningful responses.  

Prompt engineering is a process that involves the following:

  1. Understanding the problem that you are solving
  2. Designing clear and concise prompts with explicit instructions
  3. Refining these prompts iteratively based on the generated output.

with the goal of achieving:

  1. The maximum potential of the AI models
  2. Improved overall performance and effectiveness of the AI models

Why Prompt Engineering is Important

Prompt engineering plays a very significant role in the context of Generative AI applications due to the following reasons:

  1. Context-awareness: By providing clear and concise prompts, we can help AI systems better understand the context of a task, leading to more accurate and relevant outputs.
  2. Enhanced AI performance: By composing effective prompts, AI systems can generate more accurate, relevant, and context-aware responses. A well-defined prompt can improve the performance and reliability of the model.
  3. Generalization: Prompt engineering helps AI systems generalize across different tasks and domains by encouraging them to rely on their understanding of language and context instead of exploiting quirks or biases present in the training data.
  4. Adaptability: With well-designed prompts, AI systems can become more adaptable to different tasks, making them more versatile across various applications.
  5. User experience: Prompt engineering lets us create AI systems that are more intuitive and user-friendly. By understanding the nuances of human communication, these models can respond to user inputs more effectively and deliver a better overall user experience.
  6. Reduction of biases: With prompt engineering, we can guide Generative Models to produce outputs less prone to biases. AI systems can be designed to avoid perpetuating harmful stereotypes and biases by providing more precise instructions and incorporating fairness considerations.
  7. Safety: One of our major concerns about Generative AI is its safety. Crafting effective prompts can help address safety concerns associated with AI-generated content. We can reduce the likelihood of generating inappropriate, offensive, or harmful content by providing specific instructions and limitations.
  8. Interdisciplinary applications: Prompt engineering can make a significant impact across various industries and research fields, including healthcare, finance, education, and entertainment. By tailoring prompts to specific domains, AI systems can be optimized to address unique challenges and requirements in their respective fields.
  9. Rapid development and deployment: One of the most significant tasks in AI application development is fine-tuning a model to make it work for a specific application. Prompt engineering can accelerate the development and deployment of AI applications by reducing the need for extensive fine-tuning or training of the model, thus saving time and resources and making AI systems more accessible and cost-effective.

Connection to language models and AI systems

Language models, such as GPT-3 or BERT, are AI systems trained on vast amounts of text data to generate human-like responses based on the input they receive. These models use the context provided in the input prompt to generate appropriate output. Prompt engineering is intimately connected to these models, as the quality of the input prompt significantly influences the model’s performance and the resulting output.

By crafting effective prompts, users can better utilize the capabilities of these AI systems to deliver more targeted and accurate results.

How Prompt Engineering is Used

We saw that prompt engineering is all about developing well-crafted prompts that help AI systems generate more accurate, relevant, and meaningful responses across various applications. In this section, we will go over a few key aspects of prompt engineering.

A. Identifying the Problem and Desired Output

The initial step in prompt engineering is pinpointing the problem and establishing the desired output. This process involves outlining the task you want AI systems to accomplish and determining the required output format. Identifying these elements helps create a solid foundation for crafting effective prompts that guide the AI system toward the desired outcome.

B. Crafting Effective Prompts

Three key aspects should be considered while developing prompts for AI systems (a short sketch follows the list):

  1. Clarity and conciseness: First and foremost, ensure the prompt is clear and concise, providing sufficient context for the AI to grasp the task at hand without becoming excessively verbose or ambiguous. Straightforward and brief prompts allow AI systems to focus on the problem and generate relevant responses.
  2. Explicit instructions: Generative AI Systems are built to be generic. So it is essential to incorporate specific instructions within the prompt to steer the AI system toward the desired output. Explicit instructions can include specifying output length, required information, or the presentation format for the output.
  3. Encouraging elaboration and reasoning: A recommended strategy to generate a more insightful and comprehensive response is to prompt the system for explanations or examples that substantiate its conclusions. This can significantly enhance the quality and value of the generated output, making it more informative and useful for your specific needs.
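
To make these aspects concrete, below is a minimal sketch contrasting a vague prompt with a refined one that follows the guidelines above. It assumes the OpenAI Python SDK (pre-1.0 interface) with a placeholder API key and model name, so treat it as an illustration rather than an official recipe.

    # A minimal sketch of prompt refinement, assuming the OpenAI Python SDK (v0.x);
    # the API key and model name below are placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

    def ask(prompt: str) -> str:
        """Send a single user prompt to the model and return its text response."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # A vague prompt: the model must guess the audience, length, and format.
    vague = "Write about electric cars."

    # A refined prompt: clear, concise, with explicit instructions and a request
    # for a supporting example, reflecting the three aspects described above.
    refined = (
        "Write a 3-sentence summary of the main benefits of electric cars "
        "for first-time buyers. Use plain language, avoid jargon, and end "
        "with one concrete example that supports your points."
    )

    print(ask(vague))
    print(ask(refined))

The refined prompt pins down length, audience, tone, and the need for a supporting example, which is exactly what the three guidelines above call for.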

C. Iterative Refining of Prompts

Prompt engineering is a step-by-step process that involves refining the prompts used to interact with a Generative AI system. This is done by evaluating the initial output of the AI based on the prompt, identifying areas for improvement, and adjusting the prompt accordingly. This refining process is repeated until the desired outputs are achieved, which ultimately leads to an enhancement in the performance of the AI system.

Examples of Prompt Engineering Applications

Prompt engineering has an ever-expanding wide range of applications across various industries and fields. Let us look at a few examples that demonstrate its versatility.

  1. Content generation: AI systems can be guided to create engaging and relevant content for blogs, social media, and marketing materials. Specific prompts can outline the topic, target audience, and desired tone to ensure the generated content aligns with the intended purpose.
  2. Sentiment analysis: AI systems can more accurately detect the sentiment behind a piece of text (positive, negative, or neutral) when given well-crafted prompts. This capability can be leveraged in understanding customer feedback or analyzing social media trends (see the sketch after this list).
  3. Question answering: AI-powered chatbots and virtual assistants can benefit from effective prompts that enable them to provide more accurate and contextually relevant answers to user questions. This improvement leads to better user experiences and increased trust in AI systems.
  4. Data labeling: Labeled data is critical for training machine learning models. Prompt engineering can help AI systems generate more accurate and consistent labels for datasets, streamlining the data preparation process and improving model training.
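
As an illustration of the sentiment-analysis use case above, here is a small sketch that constrains the model to a fixed label set. It again assumes the OpenAI Python SDK (pre-1.0 interface); the model name and prompt template are illustrative assumptions, not a prescribed format.

    # A sketch of prompt-driven sentiment analysis, assuming the OpenAI Python SDK (v0.x).
    import openai

    openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

    SENTIMENT_PROMPT = (
        "Classify the sentiment of the following customer review as exactly one of: "
        "positive, negative, or neutral. Reply with only the label.\n\nReview: {text}"
    )

    def classify_sentiment(text: str) -> str:
        """Ask the model for a single sentiment label for the given text."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": SENTIMENT_PROMPT.format(text=text)}],
        )
        return response.choices[0].message.content.strip().lower()

    print(classify_sentiment("The delivery was late and the box arrived damaged."))
    # Likely output: "negative"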

Prompt engineering plays a vital role in maximizing the capabilities of large language models across various industries. Users can generate more relevant and accurate outputs by crafting specific prompts that align with the intended purpose. The applications of prompt engineering will continue to expand and shape the future of AI.

Significance of Prompt Engineering 

Prompt engineering has become a crucial technique in the ever-evolving landscape of artificial intelligence. It is vital in enhancing AI capabilities, reducing biases and safety concerns, facilitating human-AI collaboration, and revolutionizing various industries and research fields.

Prompt engineering shapes how we interact with AI systems, enabling us to generate more relevant and accurate outputs by creating specific prompts that align with the intended purpose. As a result, prompt engineering has become integral to maximizing the effectiveness of large language models.

As AI continues to advance, prompt engineering will play an increasingly important role in shaping the future of AI and unlocking new possibilities across various industries. By reducing biases and facilitating human-AI collaboration, prompt engineering can improve the quality of life and work for people worldwide.

A. Enhancing AI Capabilities

Prompt engineering empowers AI systems to perform at their full potential by guiding them to produce more accurate, relevant, and context-aware responses. By optimizing input prompts, we can unlock the true capabilities of AI systems, leading to better performance and more reliable results.

B. Reducing AI Biases and Safety Concerns

One of the significant challenges in AI development is mitigating biases and addressing safety concerns. Prompt engineering offers a way to guide AI systems in generating outputs less prone to biases and stereotypes. By incorporating fairness considerations and more precise instructions, we can create AI systems that promote ethical use and avoid perpetuating harmful stereotypes.

C. Facilitating Human-AI Collaboration

Prompt engineering is essential for building AI systems that seamlessly collaborate with humans. By designing more intuitive and user-friendly prompts, AI systems can better understand and respond to human inputs, leading to more effective communication and cooperation. This enhanced collaboration ultimately results in a more satisfying user experience.

D. Impact on Industries and Research Fields

Prompt engineering has a transformative impact on various industries and research fields, with applications spanning from medicine to entertainment. Here are a few key sectors where Prompt engineering is making a difference:

  1. Medicine: Prompt engineering can help AI systems deliver more accurate diagnoses, recommend personalized treatment plans, and synthesize complex medical information for patients and healthcare professionals.
  2. Finance: Prompt engineering can help AI systems improve risk assessment, fraud detection, and investment analysis. By crafting targeted prompts, AI can deliver more accurate predictions and insights, enabling better decision-making.
  3. Education: Prompt engineering can guide AI systems in creating personalized learning plans, providing instant feedback on assignments, and assisting educators in identifying areas where students need additional help.
  4. Entertainment: In the entertainment industry, AI systems can leverage prompt engineering to generate engaging content, create realistic virtual worlds, and develop personalized user recommendations.

The significance of prompt engineering in the current world is immense, as it continues to redefine our interactions with AI systems and push the boundaries of what AI can achieve. By mastering prompt engineering, we can unlock new possibilities and drive advancements in various industries, ultimately shaping a more innovative and connected world.

Challenges and Limitations

Despite its transformative potential, prompt engineering has challenges and limitations. In this section, let us explore the inherent biases in language models, the difficulty in achieving precise control, and the issues surrounding scalability and generalizability.

A. Inherent Biases in Large Language Models

Language models are trained on vast amounts of text data, often containing biases and stereotypes in the real world. Consequently, these biases may inadvertently influence AI systems when generating responses. While prompt engineering aims to reduce biases and create fairer AI systems, it cannot entirely eliminate the inherent biases present in the language models themselves. Addressing this challenge requires a multifaceted approach, combining prompt engineering with advances in model training and data curation to minimize biases and ensure ethical AI use.

B. Difficulty in Achieving Precise Control

Since most of these AI Models are very generic, achieving precise control over AI-generated outputs is often challenging. While well-crafted prompts can guide AI systems toward more accurate and contextually relevant responses, attaining complete control over the generated content remains difficult. Even with carefully designed prompts, AI systems may still produce unexpected or undesirable outputs. This limitation will require continuous refinement of prompts and ongoing research into better techniques for controlling AI system behavior.

C. Scalability and Generalizability Issues

Prompt engineering is an iterative process often involving trial and error, making it time-consuming and resource-intensive. This approach can raise scalability issues, particularly when working with large-scale AI systems or applications requiring numerous prompts. Moreover, crafting effective prompts for one specific task or AI system may not guarantee generalizability to other tasks or systems. Hence there is a need to strike a balance between creating customized prompts for each use case and developing general strategies that can be adapted across various use cases.

While prompt engineering has the potential to revolutionize our interactions with AI systems, it is essential to acknowledge and address its challenges and limitations. By understanding the inherent biases in language models, working towards achieving precise control, and addressing scalability and generalizability issues, we can continue to refine and advance prompt engineering techniques, ultimately unlocking new possibilities in the world of artificial intelligence.

Exploring New Horizons: Future Directions and Opportunities in Prompt Engineering

As we continue to unlock the potential of generative AI, several promising future directions and opportunities await, offering exciting prospects for further advancement. In this section, we will see what the future holds for prompt engineering.

A. Research Advancements in Prompt Engineering

As AI systems and language models continue to evolve, research is needed to develop more effective and sophisticated prompt engineering techniques. Future advancements in this domain could include:

  • Creating new methods for optimizing prompts.
  • Developing AI-assisted Prompt engineering tools.
  • Exploring techniques that allow for precise control over AI-generated outputs.

I believe these research advancements will help overcome existing challenges and limitations.

B. Interdisciplinary Collaborations

Collaboration across various fields, including linguistics, psychology, and computer science, is a new development area in prompt engineering. Experts from different disciplines can combine their perspectives and expertise to create more effective prompts that account for diverse contexts and nuances. Such collaborations can lead to innovative solutions that address biases, ethical considerations, and usability concerns, driving the field of prompt engineering forward.

C. Open-source Initiatives and Community Involvement

Open-source initiatives and community involvement are crucial for the growth and development of prompt engineering. Researchers and developers can share resources, knowledge, and tools to advance the field, identify best practices, and promote innovation. Open-source initiatives can also facilitate the adoption of prompt engineering techniques by developers and organizations worldwide. Encouraging community involvement ensures that diverse perspectives and experiences are considered.

Prompt engineering holds immense promise, with opportunities for research advancements, interdisciplinary collaborations, and open-source initiatives. Collaboration and innovation can shape the future of AI, unlocking new possibilities in various industries and fields. As we look ahead, the potential for prompt engineering to transform our interactions with AI systems is exciting, paving the way for a more connected and intelligent world.

Final thoughts

In conclusion, prompt engineering is a critical aspect of maximizing the effectiveness of large language models, and its potential to transform our interactions with AI systems is fascinating. Creating specific prompts that align with the intended purpose allows users to generate more relevant and accurate outputs, leading to better user experiences and increased trust in AI systems.
Furthermore, interdisciplinary collaborations, open-source initiatives, and community involvement are crucial for the growth and development of prompt engineering. By combining expertise from different fields and sharing resources, knowledge, and tools, we can collectively advance the field and unlock new possibilities across various industries and research fields.
As AI continues to evolve, prompt engineering will play an increasingly important role in shaping the future of AI, and there will always be new developments and techniques to explore. By adopting these opportunities and fostering a spirit of collaboration and innovation, we can continue improving the quality of life and work for people around the world through AI.

Resources to learn more about Prompt Engineering

A Comprehensive Guide on Artificial Intelligence and Machine Learning for Beginners


Artificial Intelligence and Machine Learning are gaining lots of traction for a good reason. There has been significant development in both. Today, AI and ML are two critical technologies responsible for digital transformation worldwide.

If you are new to AI and ML, this is the guide for you. I have compiled a comprehensive guide for beginners that discusses the two technologies and their phenomenal applications.

So, let’s dive in.

What is Artificial Intelligence?

Artificial Intelligence is a technology that mimics human intelligence. It enables computer applications to learn from experience through algorithm training and iterative processing, making the system competent enough to predict and complete the given task.

AI can be more efficient than humans at completing a given task. AI systems become more competent and faster through iteration, making them the best choice for comprehensive and intelligent decision-making.

AI is an extensive branch of computer science associated with building intelligent machines that can perform tasks that require human intelligence. Machine Learning, on the other hand, is a type of AI that enables software applications to be more accurate at analyzing and predicting outcomes.

AI has influenced consumer products and has been responsible for many breakthroughs in physics and healthcare.

AI's importance and components have been known for a long time. They are regarded as techniques and tools that make the world better, and you do not need fancy tech gadgets to use them. AI simplifies or even eliminates human effort; with AI, humans no longer have to deal with many mundane tasks.

AI is more efficient, less error-prone, and can more precisely complete its tasks with minimal human intervention. 

What is Machine Learning?

Machine Learning is a branch of AI where a computer learns from data. Human beings learn by various means such as books, training, and podcasts; a computer learns from data using statistical methods and algorithms.

Enabling machines to think like humans isn't as easy as it sounds. Strong artificial intelligence is possible only through machine learning.

Machine learning is a procedure where a system analyses a large amount of historical data and identifies trends and correlations between data points using statistical methods. The system uses these learnings and builds algorithms to make accurate predictions.

The key to a successful Machine learning solution is the availability of reliable and relevant data. These days, almost all organizations build sophisticated data pipelines to extract, load, and transform data that could be used to create Machine Learning algorithms. 

What is the Significance of AI and ML in Present Times?

AI and ML are used widely by large and small business organizations to increase their efficiency and advance innovation. Studies have shown that 41% of companies rushed their AI rollout in 2021 due to the pandemic, while 31% of companies already had AI in production or were piloting AI technologies before the pandemic.

Here's why AI and ML are so significant in present times:

  • Healthcare: AI is used for diagnosing patients and predicting treatments. It is also used extensively during drug development, helping to speed up the process and reduce costs.
  • Finance: Trading brokerages, fintech companies, and banks use ML algorithms to automate trading and provide financial advisory services to investors.
  • Retail: Retail chains use ML algorithms to develop AI engines that provide relevant product suggestions based on customers' past buying habits and geographic, demographic, and historical data.
  • Scam and Fraud Detection: Artificial intelligence has emerged as an effective tool for preventing financial crimes due to its increased efficiency. Banking institutions use ML to analyze vast numbers of transactions and uncover fraud trends, which are subsequently used to detect fraud in real time.
  • Data Security: ML models can quickly identify data security risks before they become breaches, learning from previous incidents to predict high-risk activities.

What is the Difference Between Artificial Intelligence and Machine Learning?

ML and AI are often used interchangeably, but they are not the same. Let’s take a look at the difference between the two.

Artificial Intelligence vs. Machine Learning

  • Definition: Artificial Intelligence enables machines to simulate human behavior. Machine Learning is a subset of AI that allows machines to learn automatically from past data without explicit programming.
  • Objective: The objective of AI is to build an innovative, intelligent system that behaves like humans and can perform complex tasks without manual intervention. The objective of ML is to create a system that can predict or perform the specific task for which it was trained using past data.
  • Learning: An AI system constantly learns from external data and from its previous predictions (self-correction, also known as reinforcement learning). ML learning is limited to the data the system already knows, and reinforcement learning must be performed manually.
  • Examples: AI – voice assistants such as Siri and Alexa, self-driving cars, modern games. ML – recommendation systems, image recognition, speech recognition, etc.
  • Subsets: AI – machine learning, deep learning, natural language processing, robotics, machine vision. ML – deep learning.


How are AI and ML Changing the World?

In this modern age, AI and ML dramatically improve our efficiency and make the world better. Chances are high that you unlocked your device using facial recognition just to read this blog. The use of AI in our daily lives has grown exponentially. Let's take a quick peek at a few areas where AI is bringing dramatic changes to our day-to-day lives.

Medicine and Healthcare

The modern medical industry relies heavily on artificial intelligence. Here are a few examples that will help you understand how AI and ML are radically changing modern medicine and healthcare.

Diagnosis – According to a study by the Agency for Healthcare Research and Quality, misdiagnosis was the primary cause of 10% of patient deaths. Currently, machine learning is used to determine the precise location of a tumor and whether it is malignant or benign with around 88% accuracy. AI also played a vital role in speeding up the development of the COVID-19 vaccine.

Robotic Surgeries – Though completely autonomous robotic surgeries are not yet a reality, AI and ML help with surgical pre-operative planning, decreasing surgical trauma, and more. Modern AI-driven surgical robots take over mundane tasks, which allows surgeons to focus on the complex aspects of the surgery. (More can be read here.)

Precision Medicines – Traditionally, medicines are prescribed to patients based on their symptoms (symptom-driven) and are often generic. Precision medicines, in contrast, use information about a person's own genes or proteins to prevent, diagnose, or treat disease (cancer.gov). AI and ML play a vital role in determining precision medicines.

The FDA approved an AI-based technology in 2018 to detect diabetic retinopathy by scanning the patient's eye. The system can operate independently, and even a low-skilled worker can easily take the scans. These types of systems can lead to faster and more accurate diagnoses.

AI can also be used to make drug discovery faster and less expensive. With the help of research papers and clinical trials, the technology can detect candidate compounds that react with the pathology of a specific disease. An AI drug discovery system compares samples from patients with and without the disease, which can help discover new details about a specific disease.

Agriculture

According to a study conducted by Michigan State University, the global population is expected to reach 9.2 billion by 2050. We need radical innovation to ensure there is enough produce to feed everyone with the limited farmland available. Artificial intelligence and machine learning play a vital role in ensuring profitability, sustainability, efficiency, and productivity in the agricultural sector with decreased manpower.

Below are a few use cases of AI, along with IoT devices such as soil sensors, cameras, and drones, in the agricultural industry.

Crop Selection – AI is used to select high-potential seeds and predict the best time for planting and harvesting, thus maximizing crop returns and improving efficiency.

Monitoring – AI (computer vision) is used for in-field monitoring of crops to detect and deter pests, weeds, and plant diseases. Drones can monitor vast areas of agricultural land quickly and efficiently and generate reports. These reports can also be fed automatically into autonomous robots to take immediate corrective action.

Precision Application of Water and Pesticides – With drones, GPS, and computer vision, farmers can apply pesticides, weed control, and irrigation more precisely. This reduces wastage by up to 90%, which drastically reduces cost and improves efficiency. Another advantage of precision application is a reduction in environmental pollution.

Autonomous Robots – Farmers use robots to assist them in analyzing and carrying out activities such as harvesting. With the help of computer vision, these robots can detect plants that are ready for harvesting and harvest them quickly and efficiently. (Read more)

Soil Monitoring and Analysis – Soil fertility is heavily dependent on parameters such as pH level, temperature, moisture content, humidity, carbon level, etc. IoT devices such as soil sensors can be used to collect these data, and AI can prescribe timely corrective action. 

Predictive Weather Modeling – 90% of crop losses are due to weather events, and crop yields are affected by temperature, humidity, and rain. Detailed real-time and predictive weather information helps farmers make informed decisions and maximize crop returns.

Banking and Financial Institutions

Around 40% to 50% of banks and financial institutions worldwide are using AI to optimize at least some part of their offerings. Let's take a quick peek at a few uses of AI in fintech.

Credit Decisions – Traditionally, lenders use just the borrower's credit score to make credit decisions. However, with AI and ML, lenders can detect potential risk factors that may not be obvious. AI can also speed up decision-making with less human intervention, reducing fees and enabling better interest rates. (Read More)

Mortgage and Loan Processing – Natural language processing (NLP) can improve the efficiency and accuracy of gathering, reviewing, and verifying supporting documents. This can drastically improve the time taken for underwriting. 

AI-Driven Trading – AI can analyze historical data and other relevant factors such as sentiment analysis to make accurate predictions about stock prices and perform automated trading based on investors’ long- and short-term investment strategies. 

Real-time Risk Management – AI and ML are used by financial institutions to quickly churn through vast data points to identify, assess, and mitigate risks. (Read More)

Fraud Detection and Prevention – AI systems can monitor users' location and transaction history and derive their spending habits. Any transaction that doesn't fit those spending habits can be blocked or flagged to prevent fraud.
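
As a rough illustration of flagging transactions that do not fit a customer's spending habits, here is a sketch using scikit-learn's IsolationForest on invented transaction features. It is a toy example under simplified assumptions, not a production fraud system.

    # Toy spending-habit anomaly detection: transactions are summarized as
    # (amount, hour_of_day), and unusual ones are flagged for review.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Simulated "normal" spending history: modest amounts, daytime hours.
    history = np.column_stack([
        rng.normal(45, 15, 500),   # typical purchase amounts in dollars
        rng.normal(14, 3, 500),    # typical purchase hours (24h clock)
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(history)

    # New transactions: one ordinary purchase, one large purchase at 3 a.m.
    new_txns = np.array([[52.0, 13.0], [1900.0, 3.0]])
    print(model.predict(new_txns))  # +1 = looks normal, -1 = flag for review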

Personalized Banking – AI can identify the customers’ needs based on their banking history, types of accounts, balance, social profiles, etc., and provide them with the most relevant financial product. This can be done through their websites, IVR systems, Mobile Apps, or Chatbots. 

Security – AI-enabled Banking systems can use Bio-metrics or voice to authenticate and authorize users while performing banking operations.

Energy Sector

Like Agriculture, the energy sector also faces challenges due to the rise in demand. As more and more nations develop, their energy consumption is also rising. More and more energy companies rely on AI to achieve sustainability and efficiency. 

Let us look at a few use cases here. 

Renewable Energy – The most common renewable energy is derived from solar panels and windmills. Accurate weather prediction and historical weather data can improve forecasts of the energy generated from solar and wind, which helps in the efficient usage of conventional energy sources such as generators.

Fault Prediction – Energy companies are using the data from various sensors on the electrical grid to generate AI Models that can accurately predict issues in the grid. This helps to prevent power outages. 

Nuclear Energy – The nuclear energy sector is also tapping into the benefits of artificial intelligence and machine learning, using these technologies to streamline and optimize nuclear power plant maintenance and operation and to help advance next-generation nuclear power technologies.

There has also been news about DeepMind, an artificial intelligence company backed by Google, training an AI to control nuclear fusion. Scientists use magnetic coils to shape and position the plasma in a fusion reactor, nudging it into the right configuration much like a potter shapes a lump of clay on a wheel. The coils have to be controlled carefully to prevent the plasma from touching the walls of the vessel, which would damage the walls and reduce the fusion reaction. Whenever researchers want to change the plasma configuration and try new shapes that might yield cleaner plasma or more power, a large amount of design and engineering work is required. DeepMind has developed an AI that can control the plasma autonomously, making it well suited to this task.

Types of Artificial Intelligence

Artificial Intelligence systems can be classified in two ways:

  • Classification based on functionalities 
  • Classification based on capabilities

Classification based on functionalities

1. Reactive Machines

Reactive machines are the most basic form of AI. As the name suggests, they react to a given condition. They function as they are programmed and will always respond to an identical situation in an identical way. These systems work only with the data provided to them and have no memory of the past (what they did before). They are good at the task they are programmed for and cannot perform anything else.

Some great examples of reactive machines are the Netflix recommendation engine and spam filters. They don't interact with the world, but they respond to identical situations in the same manner every time they encounter the same scenario.

2. Limited Memory

AI systems with limited memory are able to use past experiences to inform their current decisions, but they can only retain a certain amount of information. These systems are able to remember certain events or actions that have occurred within a specific timeframe and use them to shape their current behavior. For example, a self-driving car with limited memory may remember a particular route it has taken in the past and use that information to navigate a similar route in the future. However, it will not be able to remember every route it has ever taken and may not be able to adapt to significantly different routes.

Limited memory AI systems are often used in practical applications, such as virtual assistants, customer service chatbots, and language translation systems. They can remember specific interactions and use that information to provide more personalized responses or improve their performance over time. However, their ability to retain and use past experiences is limited. They cannot adapt to significant environmental changes or learn new tasks as quickly as more advanced AI systems.

3. Theory of Mind

The theory of mind refers to the ability to understand and infer other agents’ thoughts, beliefs, and intentions (human or machine). It is a crucial component of human social cognition and is essential for understanding and predicting the behavior of others. AI systems with a theory of mind can understand that other agents have their own perspectives and can anticipate their behavior based on that.
For example, an AI system with a theory of mind may understand that a person wants to go to a particular destination and can infer their intentions based on their past behavior and the current context. It could then use this information to suggest a route or provide directions.
Theory of mind is a relatively new area of research in AI and has not yet been widely implemented in practical applications. However, it has the potential to improve the performance of AI systems in a variety of tasks, such as natural language processing, decision-making, and social interaction.

4. Self Aware

Self-aware AI refers to artificial intelligence systems with a sense of self and can reflect upon their own thoughts and actions. These systems can understand their own limitations and can learn and adapt over time. They may be able to introspect on their own mental states and understand the relationships between their thoughts, emotions, and behaviors.
Self-aware AI is still in the research stage and has not yet been widely implemented. But it has the potential to significantly improve the performance of AI systems in a variety of tasks, such as decision-making, problem-solving, and social interaction.
For example, a self-aware AI system may be able to understand its own limitations and seek out additional information or resources to help it solve a problem.

Self-aware AI is a highly complex and controversial topic, and there is an ongoing debate about the feasibility and ethical implications of creating truly self-aware artificial intelligence.

Classification based on capabilities

AI can also be classified based on its capabilities into the following categories:

1. Artificial Narrow Intelligence

Narrow AI (also known as Weak AI) is a type of artificial intelligence designed to perform a specific task or function and cannot adapt to new tasks or situations. It is typically used in practical applications where it can be trained to perform a specific task with a high level of accuracy.
Examples of narrow AI include virtual assistants like Siri and Alexa, which are designed to answer questions and perform specific tasks, such as setting reminders or playing music. These systems are able to understand and respond to natural language input and can perform a wide range of tasks, but they are not able to adapt to new tasks or situations on their own.
Narrow AI is widely used in a variety of applications, including customer service chatbots, language translation systems, and image and speech recognition systems. It is particularly useful for tasks that require a high level of accuracy and repeatability but are not significantly impacted by changes in the environment or the need to adapt to new tasks.

2. Artificial General Intelligence

General AI (also known as Strong AI) is a type of artificial intelligence designed to perform any intellectual task that a human can perform. It can learn and adapt to new tasks and situations and exhibit human-like intelligence and decision-making abilities.
General AI is still in the research stage and has not yet been fully realized. However, it has the potential to revolutionize a wide range of fields by enabling machines to perform tasks that currently require human intelligence, such as problem-solving, decision-making, and social interaction.

3. Artificial Super Intelligence

Superintelligent AI, also known as artificial superintelligence or ASI, refers to a hypothetical future AI that would be significantly more intelligent than any human being. It would be able to perform tasks and make decisions that are beyond the capabilities of even the most intelligent humans.
There is currently no known way to predict precisely what such an AI would be capable of. Still, it would surely have the potential to revolutionize society and bring about significant technological advancements. Some experts have expressed concern about the potential risks of developing superintelligent AI, including the possibility that it could pose a threat to humanity if it were to be programmed with goals that are incompatible with human values. However, others believe that the development of superintelligent AI could bring significant benefits and help solve some of humanity’s most pressing challenges.

Types of Machine Learning

We learned that Machine learning is a subfield of artificial intelligence (AI) that involves the development of algorithms and models that can learn from data and improve their performance over time. These algorithms and models are able to learn without being explicitly programmed and can adapt to new data and situations. There are several types of machine learning, including supervised, unsupervised, semi-supervised, and reinforcement learning. Let us take a look at a few of the Machine learning types.

1. Supervised

Supervised ML is used to create AI models from labeled data. Subject matter experts (SMEs) train the models by labeling examples; the models then study newly input data and label it, for instance as responsive or unresponsive, or as associated with several kinds of issues. With these ML models, you can expect to surface more similar content.
Usually, supervised machine learning models are used for predictive analysis, learning from the previous decisions SMEs made about documents that have already been reviewed. Some of these models are built using artificial neural networks loosely modeled on the human brain. The assessments they make help with future prediction and forecasting.
A few examples of supervised learning algorithms are decision trees and linear regression.
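
As a concrete, deliberately tiny illustration, here is a sketch that fits a scikit-learn decision tree on a handful of hypothetical, hand-labeled "documents"; the features and labels are invented purely for illustration.

    # Supervised learning in miniature: learn from SME-labeled examples,
    # then predict labels for new, unseen data.
    from sklearn.tree import DecisionTreeClassifier

    # Each row summarizes a document with two made-up features:
    # [number_of_keyword_hits, document_length_in_pages]
    X_train = [[12, 3], [9, 2], [0, 10], [1, 8], [15, 1], [2, 12]]
    y_train = ["responsive", "responsive", "unresponsive",
               "unresponsive", "responsive", "unresponsive"]

    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)  # learn from the labeled examples

    print(model.predict([[10, 2], [1, 9]]))
    # Likely output: ['responsive' 'unresponsive']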

2. Unsupervised

Unsupervised ML is also used to create AI models. As the name suggests, it features more automation: the models are trained by software and processes rather than by people labeling the data. Unsupervised ML models can categorize the data you feed them and can identify trends or patterns even without labeled training data. Such models can be used for classifying or summarizing content.
Some real-life examples of unsupervised learning are customer segmentation and discovering the customer groups around which a marketing campaign is developed.
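
Here is a minimal customer-segmentation sketch using k-means clustering from scikit-learn; the spending figures are invented purely for illustration.

    # Unsupervised learning in miniature: group customers by behavior
    # without any labels, then assign a new customer to a segment.
    import numpy as np
    from sklearn.cluster import KMeans

    # Each row: [annual_spend_in_dollars, orders_per_year]
    customers = np.array([
        [200, 2], [250, 3], [220, 2],        # occasional shoppers
        [1500, 20], [1700, 25], [1600, 22],  # frequent shoppers
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
    print(kmeans.labels_)              # cluster assignment for each customer
    print(kmeans.predict([[300, 4]]))  # segment for a new customer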

3. Semi-Supervised

Semi-supervised machine learning is the middle ground between the two approaches above, combining them. SMEs label a small amount of data to start the training, and the labeled and unlabeled data are then combined to produce a model that can be used for predictive as well as descriptive purposes.
A text document classifier is a common application of semi-supervised learning.
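
The sketch below shows the semi-supervised idea with scikit-learn's SelfTrainingClassifier on a tiny made-up text corpus, where -1 marks the documents that were left unlabeled.

    # Semi-supervised text classification: a few SME labels plus unlabeled text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.semi_supervised import SelfTrainingClassifier

    texts = [
        "refund my order now",        # labeled: complaint (0)
        "great service, thank you",   # labeled: praise (1)
        "package arrived broken",     # unlabeled
        "love the new update",        # unlabeled
    ]
    labels = [0, 1, -1, -1]  # -1 means "not labeled by an SME"

    model = make_pipeline(
        TfidfVectorizer(),
        SelfTrainingClassifier(LogisticRegression()),
    )
    model.fit(texts, labels)
    print(model.predict(["the service was great"]))  # likely: [1]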

4. Reinforcement Learning

Reinforcement learning is a type of machine learning in which an agent learns through trial and error by interacting with an environment and receiving rewards or punishments based on its actions. In reinforcement learning, the agent's goal is to maximize the reward it receives over time.
The agent takes action in the environment, and the environment responds by giving it a reward or a punishment. The agent uses this feedback to adjust its behavior and improve its performance. For example, an AI agent playing a video game might receive a reward for defeating an enemy or completing a level, and it would learn to repeat actions that lead to these rewards.
Reinforcement learning algorithms use various techniques, such as value iteration and policy iteration, to learn the optimal behavior for a given task. These algorithms are used in a variety of applications, including robotics, autonomous systems, and games.
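
To make the reward loop concrete, here is a bare-bones tabular Q-learning sketch on a made-up five-cell corridor, where the agent earns a reward only when it reaches the far end. It is a toy illustration of the trial-and-error process described above, not a production algorithm.

    # Tabular Q-learning on a tiny corridor: start at cell 0, reward at cell 4.
    import random

    n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    def step(state, action):
        """Environment dynamics: move along the corridor, reward at the far end."""
        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        return next_state, reward

    for _ in range(500):                # training episodes
        state = 0
        while state != n_states - 1:
            # Epsilon-greedy: mostly exploit what we know, sometimes explore.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward = step(state, action)
            # Q-learning update: nudge toward reward + discounted future value.
            Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state

    # After training, the greedy action in every non-terminal cell should be "move right".
    print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])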

Machine Learning and Artificial Intelligence Will Drive Innovation in the Future

Machine Learning models can detect patterns to offer insights. Research by Statista showed that the global AI market is expected to increase to $126 billion by 2025. But AI doesn't just come with growth opportunities; it can also lead to the disruption of many industries.

AI and ML will give future business leaders better decision-making power and will enable researchers to look at problems from various perspectives, offering insights continuously and at a scale humans can't conceptualize on their own. These technologies will be among humanity's best allies in the future.

AI also comes with great market opportunities and can adjust itself to each subsequent wave of disruption. About 52% of companies have accelerated their AI adoption plans due to the onset of the pandemic, which had a significant effect on workplaces and businesses across the world.

With the adoption of analytic technologies, organizations are learning more about their world and themselves. ML adoption is making people at every level ask questions that challenge what the organization believes it knows about itself.

Whether rocky or rosy, the future is coming, and AI will play a significant role in it. With the development of technology, the world will see new business applications, brand-new startups, and consumer uses. Indeed, it will lead to the displacement of many jobs, but it will also create some new ones. Along with IoT, AI and ML can potentially remake the economy. However, how the technologies will impact the world is yet to be seen.

Getting Started with Ethereum Blockchain Development Part 1


We will walk through the process of setting up a Blockchain development environment in this blog.

In this exercise, we will do the following

  • Set up a minimal private Ethereum blockchain network with one mining node and one transaction node in Microsoft Azure
  • Set up the basic smart contract development environment on your local computer
  • Write and deploy a simple sample contract on the local machine
  • Deploy the same contract to the private Ethereum blockchain network

We have a lot to do, so let’s get started …

If you want to skip creating the private Ethereum blockchain and just test in your local environment, you can skip Step 1 of this exercise.

Step 1: Setting up a Private Ethereum Blockchain in Microsoft Azure

Setting up your Azure account is beyond the scope of this exercise. It is free to sign up, and you receive a $100 credit when you create your Azure account, which will be more than enough to complete this tutorial.

  1. Go to https://portal.azure.com, click on Create a Resource, and select Blockchain; from the Featured list, choose the Ethereum Consortium Blockchain option
  2. On the Basics page, enter the following and click OK
        • Resource prefix
          Enter a string that will be used as a base for naming all the resources created for this tutorial
        • VM username
          Leave the default, which is gethadmin
        • Authentication type
          Let's keep it simple by selecting the Password option
        • Password
          Type in a password that you will remember
        • Resource Group
          Create a new Resource group and name it
        • Location
          Select an Azure location of your choice


  3. On the Network Size and Performance page, enter the following and click OK
    • Number of Consortium members: 2
      This is the number of members of this blockchain. Each of these participants gets a mining node.
    • Mining Nodes: 1
      This means you will have 1 mining node per participant in this blockchain. This node will be responsible for mining blocks in your blockchain.
    • Mining Node Storage Performance: Standard
      Selecting the default and cheapest option
    • Mining node Storage replication: LRS
      Select the default value
    • Mining node virtual machine size: 2X Standard D1 V2
      We do not need a very powerful machine for this tutorial, so let's get a small one.
    • Transaction Nodes: 1
      This node will be submitting transactions to this blockchain.
    • Transaction Node Storage Performance: Standard
      Selecting the default and cheapest option
    • Transaction Node Storage replication: LRS
      Select the default value
    • Transaction Node virtual machine size: 1X Standard D1 V2
      We do not need a very powerful machine for this tutorial, so let's get a small one.
  4. On the Ethereum Settings page, enter the following and click OK
    • Network ID: 111222333
      This ID is used to name this private blockchain network. Only nodes with the same ID can peer with each other.
    • Custom Genesis Block: No
      A custom genesis block can be provided as JSON if you are creating a blockchain for a specific use. Since this is a basic tutorial, let's allow the mining node to create our genesis block.
    • Ethereum Account Password: <Enter a password of your choice>
      This will be the password of the default Ethereum account. Note this password down; we will use it later.
    • Ethereum private key passphrase: 
      This passphrase will be used to create the private key of the default Ethereum account.
  5. View the summary on the next page and click OK
  6. Accept the Terms and Conditions and Click OK
  7. Let Azure do its job and wait approximately 10 minutes for your blockchain network to appear fully deployed in the Azure portal
  8. Click on the name of the resource group that you created -> Deployment and select Microsoft-azure-blockchain.azure-blockchainXXXXXXXXX to look at the details of your blockchain
  9. Note down the following information from the Details page
    • Admin Site URL
    • ETHEREUM-RPC-ENDPOINT
    • SSH-TO-FIRST-TX-NODE
  10. Copy and paste the URL of the admin site into your browser; you should see the blockchain admin page.
  11. Our blockchain network is up and running. However, we need to unlock the default account of this blockchain to make any modifications to it. To unlock the default account, we have to SSH into the transaction node.
  12. Open the terminal on your Mac or Linux system, or use PowerShell on Windows, to SSH. Use the SSH-TO-FIRST-TX-NODE value that we copied in the previous step to open an SSH connection.
    Ignore the warning about the authenticity of the host that you will receive when you first SSH to the transaction node. Use the password that we provided for Ethereum Account Password in the previous step.
  13. Type in the following command to connect to Geth on the transaction node.
    $ geth attach
    

    geth attach opens Geth's JavaScript runtime environment (JSRE) console, which lets you interact with Ethereum. You can find more details about Geth here

  14. In the Geth Console type in the following command to unlock the default account.
    personal.unlockAccount(eth.coinbase)
    

    eth in this context is a shortcut for web3.eth


  15. This will ask for the passphrase. Provide the passphrase we created in the previous step. You should get a response of "true" if the passphrase is correct.

That's it! Your private Ethereum blockchain is ready to roll.

Step 2: Setting up your Local Development Environment with Ethereum Essentials

Let's now set up your local development environment with the essentials. We will need to do the following.

  1. Download and install Visual Studio Code from https://code.visualstudio.com
    Install the plugins you need for Solidity development in Visual Studio Code.
  2. Download and install NPM (Node Package Manager)
    Go to https://nodejs.org/en/ and download the latest LTS version of Node. At the time of writing this blog, 8.9.4 is the current version, so we are going with that.
  3. Install the following Node packages using npm (to run npm, use the terminal on Mac and Linux, or PowerShell with the Run as Administrator option on Windows)
    1. windows-build-tools (Only on Windows)
      windows-build-tools is used to compile native Node modules on Windows. You can find more information about windows-build-tools at https://www.npmjs.com/package/windows-build-tools

      npm install --global --production windows-build-tools
      
    2. truffle
      Truffle is a development environment for Ethereum; we will use it to create and compile our smart contract. You can find more information about Truffle at https://www.npmjs.com/package/truffle
      Use the following command for windows

      npm install -g truffle
      

      and this one for Mac or Linux

      sudo npm install -g truffle
      
    3. Ethereum-testrpc
      testrpc is an Ethereum client used for testing and development. More information about testrpc can be found at https://www.npmjs.com/package/ethereumjs-testrpc
      Use the following command for windows

      npm install -g ethereumjs-testrpc
      

      and this one for Mac or Linux

      sudo npm install -g ethereumjs-testrpc
      

Step 3: Create your first Smart Contract

In this step, we will create a smart contract, test it locally in testrpc, and then deploy it to your newly created blockchain network and run it.
We will be writing this contract using Solidity. Learn more about Solidity and its syntax at https://solidity.readthedocs.io/en/develop/index.html

The smart contract that we are going to write will simply print "Hello World". We will use Truffle to do the following:

  • Initialize the project
  • Add a Smart Contract
  • Compile the contract
  • Deploy the contract to local testrpc server and to the Private blockchain
  1. Setting up the Project

    1. Create a new folder called "HelloWorld" on your local machine and open a PowerShell or terminal window pointing to the HelloWorld folder
    2. Initialize the project by calling the unbox command.
      $ truffle unbox
      

      Truffle Boxes are boilerplates that you can use for kickstarting your development. You can read more about truffle boxes from http://truffleframework.com/boxes/
      The unbox command without any arguments creates a default project containing a MetaCoin contract.
      Since we are not dealing with MetaCoin in this exercise, we will clean this project up in a later step. Open the folder in Visual Studio Code and take a look at the generated project structure.

  2. Create a new Smart contract

    You can either create a contract manually, or you can use truffle to create a new contract. We are going to use truffle to create the new contract.
    Type in the following command in the terminal / PowerShell

    $ truffle create contract HelloWorld
    

    If you look in the contracts folder of your project, you can see that Truffle has created a new Solidity file, HelloWorld.sol. Another option is to create a new file manually in the contracts folder with the name HelloWorld.sol.

    Your contract name must match the name of the Solidity file that you create for it.


  3. Write the Logic of our Smart Contract

    Truffle's "create contract" command has created a basic skeleton for our contract. Since the logic of our smart contract is very simple, copy and paste the following code into HelloWorld.sol.

    pragma solidity ^0.4.4;
    
    contract HelloWorld {
      function HelloWorld() {
        // constructor
      }
    
      function sayHello() public returns (string) {
            return ("Hello World!!!");
      }
    
    }
    

    Before we compile our contract, let's clean up our project. Delete the following files from the contracts folder.

    1. ConvertLib.sol
    2. MetaCoin.sol
  4. Compile our Smart Contract

    Since we are using truffle to compile our contract, type in the following command  in the terminal to compile our smart contract

    $ truffle compile
    

    Truffle will compile the code and generate a build folder in the project. If you examine the build/contracts folder, you will find HelloWorld.json, which contains the deployable binary.

  5. Deploy Smart contract in local testrpc

    We are using Truffle to deploy our contract. To let Truffle know that we are deploying to our local test network, we first have to make sure that we have localhost as a destination in the truffle.js file. You can find truffle.js in the root folder of your project. The truffle unbox command should have pre-populated it with the localhost network. Open your truffle.js and make sure that you have the following entry.

    module.exports = {
      networks: {
        development: {
          host: "localhost",
          port: 8545,
          network_id: "*" // Match any network id
        }
      }
    };
    

    Truffle requires a "Migrations contract" to use the migrate feature. You may have noticed that we did not delete Migrations.sol from the contracts folder when we cleaned up in the previous step. Open 2_deploy_contracts.js from the migrations folder. This is the file that Truffle uses to determine which contracts to deploy. Clean up this file to remove the references to the files that we deleted and add a reference to our HelloWorld contract. Change 2_deploy_contracts.js to this.

    
    var HelloWorld = artifacts.require("./HelloWorld.sol")
    module.exports = function(deployer) {
      deployer.deploy(HelloWorld);
    };
    
    

    We now need to start our local testrpc server. Open a new Terminal / PowerShell window and type the following command

    $ testrpc
    

    That will start your testrpc server. testrpc comes with 9 accounts, and the first one is the default account. The private keys listed in the output are the private keys of those accounts. testrpc runs on port 8545.

    testrpc

    Run the following command in the terminal to deploy the contract to testrpc server.

    $ truffle migrate
    

    truffle migrate
    If you see a screen similar to this, it means your contract has been successfully deployed to the testrpc server.

  6. Execute the Smart Contract

    We will use truffle console to execute our HelloWorld contract.
    Truffle console is a javascript console that can be used to interact with smart contracts.
    Type the following command on the terminal to open truffle console.

    $ truffle console
    
    1. Since this is a javascript console define a variable that will hold the reference to the contract.
      var hwcontract
      

      Just like in Google Chrome or any other JavaScript console, when you define a variable you will see the message “undefined”. You can ignore it.

    2. In testrpc, we need to get a reference to our deployed contract asynchronously. Since truffle executes the commands line by line, make sure that this call completes before running the next command.
      HelloWorld.deployed().then(function(deployed){hwcontract=deployed;})
      
    3. Execute the contract by calling its sayHello method.
      hwcontract.sayHello.call()
      

      You should be able to see the Hello World in the console

      HelloWorld
      Congratulations, you have successfully executed your first smart contract in testrpc.

  7. Deploy HelloWorld contract to our Private Blockchain

    To deploy this contract to our newly created blockchain, we need to update the truffle.js with the network information of the blockchain network that we had created in our previous step.
    Update the truffle.js with the following snippet.
    Replace the xxxx with the hostname of the ETHEREUM-RPC-ENDPOINT that we copied in the previous step.
    Remember, the host value should be just the hostname of the endpoint; do not add the https prefix or the port, just the hostname.

    network_id is the id that we entered as network id when we created our blockchain

    module.exports = {
      networks: {
        development: {
          host: "localhost",
          port: 8545,
          network_id: "*" // Match any network id
        },
        prod: {
          host: "xxxxxx",
          port: 8545,
          network_id: "111222333"
        }
      }
    };
    
    1. We need to unlock the default account of our blockchain before we start the deployment. Repeat steps 12, 13, 14 and 15 from Step 1 to unlock the default account.
    2. Now use truffle with the network parameter to migrate our contract to the network.
      Go to the terminal and type in the following command

      $ truffle migrate --network prod
      

      This might take some time.
      Once the migration is completed, you should see a screen like this
      HelloWorld deployment success

  8. Execute your Smart contract in the real Blockchain

    We will use truffle console here also to execute our contract.
    In order to connect to a remote network, we need to use the network parameter while starting the truffle console.
    Type the following command on the terminal to open truffle console.

    $ truffle console --network prod
    
    1. Like in the previous step define a variable that will hold the reference to the contract.
      var phwcontract
      
    2. Get a reference to our contract
      HelloWorld.deployed().then(function(deployed){phwcontract=deployed;})
      
    3. Execute the contract. Notice that here, instead of calling the contract asynchronously, we are executing it directly.
      phwcontract.sayHello()
      

      The execution on the real blockchain is going to be a lot slower than the testrpc. However, you should be able to see the response like this in your console.
      HelloWorld_p

Congratulations, you successfully created and deployed your first smart contract. This is just a beginning. We will do more exciting programs in the future.

BlockChain Fundamentals Part 1


Let’s take an example where Person A is transferring $50 to Person B.
Person A sends a request to his bank to initiate the transfer. The bank verifies the request and, if everything is OK, subtracts $50 from Person A’s account and adds $50 to Person B’s account. The bank then updates its ledgers to reflect these changes.

Transaction in a Centralized Ledger

Almost all of the transactions that we do these days are handled by one or more servers managed by a single entity. This entity could be your bank, a social media site or an online shop, and they all do a standard action: record your transactions in a centralized database.

Centralized Ledger

You might already be thinking that these entities do not really use a single server or ledger to perform these transactions. That is true, but even if they have sophisticated, multi-geographic, fault-tolerant server farms to store their ledgers, those ledgers are all managed and controlled by a single entity. In other words, the entity that records the data has full control over it.

Person A in the above case has to trust that the bank he transacts with will act as he expects. Though this system has been working for some time, there are a few drawbacks to this type of transaction and ledger.

  1. The entity who owns the ledger has full control over it and can manipulate the ledger at its own will, without its customers’ permission.
  2. The records in this ledger can easily be tampered with by someone who has access to it, which means a malicious change can affect everybody who relies on that ledger (for example, someone could hack into the bank’s centralized ledger and modify transactions).
  3. Another disadvantage is the single point of failure. For example, if the bank decides to shut down its service, users will not be able to perform transactions.
    In the extreme case where a natural disaster wipes out all of its data centers, all the transactions in the ledgers will be lost.
  4. Centralized ledgers store data in silos: your bank has its ledger, an auditor has his ledger, the tax authorities have their ledgers, and they are never synchronized.
  5. Though it is a centralized ledger from the general point of view, the organization still has to spend a lot of money on redundancy and scaling for its ledgers. That makes this approach very expensive.

What is BlockChain

In simple terms, Blockchain is a distributed ledger of transactions. All the transactions in the blockchain are encrypted and synchronized between the participants.

Blockchain-workings-explained

Key points about Blockchain

  1. It is a Distributed Ledger.
  2. Members of a blockchain network are called Nodes.
  3. Each node has a copy of the full ledger.
  4. Nodes use a Peer to Peer network for synchronization.
  5. All information on the Distributed Ledger is secured by Cryptography.
  6. By design, it eliminates the need for a Centralized Authority: transactions are validated by peers before they are accepted.
  7. Transactions are added to the ledger based on Consensus from the nodes.
  8. Each valid transaction in the Blockchain is added to a Block.
  9. Blocks are created by Blockchain miners.
  10. Multiple Blocks make a Blockchain.
  11. All Blocks in a Blockchain are Immutable.
  12. It ensures a complete audit trail of all transactions (Verifiability).

I know this is too much to chew on, I promise you will get all these concepts by the end of this article. So be with me and let’s move on…

Distributed Ledgers  (DLT)

According to Wikipedia, “A distributed ledger is a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralized data storage.”

Distributed Ledger

In a distributed ledger, each participating member has a copy of the ledger. In simple terms, during a transaction the ledgers of the sender and the receiver are updated and the transaction is broadcast; the transaction details are then updated in the ledgers of all the participants over the peer-to-peer network.

Peer to peer (P2P) network

Peer to Peer (P2P) is a decentralized communication model where each participating node has the same capabilities. Unlike the client-server model, any node in a peer-to-peer network can send requests to other nodes and respond to requests from other nodes. The best-known example of a peer-to-peer network is BitTorrent.

Peer to peer Network

Any peer can perform a transaction on the P2P network, which means two nodes can attempt conflicting transactions at the same time; this is called the double-spend problem. Blockchain uses a consensus system to resolve these kinds of conflicts.

We will discuss the double-spend problem and consensus systems in detail in a later post. Let’s focus on the basic concepts first.

Cryptography in Blockchain

Due to its nature, any data in a blockchain is visible to all the members of the network, which would make this data vulnerable; however, blockchain uses cryptography to make all the transactions extremely safe and secure.

In simple terms, Cryptography is used for obfuscating (encrypting and decrypting) data. Blockchain leverages two cryptographic concepts in its implementation.
They are the following

  • Hashing
  • Digital Signature.

What is Hashing

Hashing is a mechanism where any input is transformed to a fixed size output using a hashing algorithm. The input could be of any file type, for example, you can generate a hash of an image, text, music, movie or a binary file.

Hashing-Representation

Whatever the size of the input, the hashing algorithm guarantees that the output is of a fixed size. (For example, if you create a SHA-256 hash of any file, the hash will always be 256 bits.)

Key properties of Hashing.

Any hashing algorithm should adhere to the following principles.

Determinism
  • For a given input, the algorithm should always produce the same hash value.
For example, the SHA-256 hash of the words "Hello World" will always be
A591A6D40BF420404A011733CFB7B190D62C65BF0BCDA32B57B277D9AD9F146E
Pre-image resistance
  • It should be computationally hard or impossible to recover the input from the output.
In the above example, we saw that the hash value
A591A6D40BF420404A011733CFB7B190D62C65BF0BCDA32B57B277D9AD9F146E
represents "Hello World".
It is practically impossible to recover the words Hello World from that hash value.
Second pre-image resistance
  • The hash generated for a given input should not match the hash of any different input.
    For example, hash(“Hello World”) != hash(“xxx”), where “xxx” is any input other than “Hello World”.
Collision resistance
  • It should be hard or impossible to find two different inputs (of any size or type) that have the same hash. Collision resistance is very similar to second pre-image resistance.

Hashing is commonly used to find the checksum of a file. For example, when you download software from a server, the software vendor provides the checksum or hash of the software package. If the hash of the downloaded software matches the hash given by the provider of the file, we can be confident that the software was not tampered with.
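
To make the determinism and checksum ideas concrete, here is a minimal Java sketch (an illustration, not part of any blockchain library) that uses the standard java.security.MessageDigest API. The class and method names are just examples; it prints the same SHA-256 value for "Hello World" every time it runs.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashDemo {

    // Returns the SHA-256 hash of the input bytes as an uppercase hex string.
    static String sha256Hex(byte[] input) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(input);
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02X", b & 0xFF));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Determinism: the same input always produces the same hash.
        System.out.println(sha256Hex("Hello World".getBytes(StandardCharsets.UTF_8)));
        // Prints A591A6D40BF420404A011733CFB7B190D62C65BF0BCDA32B57B277D9AD9F146E

        // Checksum idea: hash the downloaded file's bytes and compare the result
        // with the value published by the vendor (file reading omitted here).
    }
}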

Blockchain uses hashes to represent the current state of the blockchain. Each block in the blockchain may have hundreds of transactions, and verifying each transaction individually would be very expensive and cumbersome. So blockchain leverages a Merkle root to verify the transactions.

Merkle tree of a Block

In a Merkle tree, each non-leaf node holds the hash of its child nodes. Look at the diagram below to understand the concept of a Merkle tree.

MerkleRoot

Each block in the blockchain has the Merkle root of its transactions and the hash of its previous block. The Merkle root can be used as a definitive mechanism to verify the integrity of the block, as even the slightest change to any of the records in this tree will alter the value of the original Merkle root.
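
To see how a Merkle root reacts to changes, here is a toy Java sketch of the pairwise-hashing idea. Real implementations differ in detail (Bitcoin, for example, uses double SHA-256 over binary transaction IDs), so the helper names and the simple hex-string concatenation below are illustrative assumptions only.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class MerkleRootDemo {

    static String sha256Hex(String input) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02X", b & 0xFF));
        }
        return hex.toString();
    }

    // Repeatedly hash pairs of hashes until a single root remains.
    static String merkleRoot(List<String> txHashes) throws Exception {
        List<String> level = new ArrayList<>(txHashes);
        while (level.size() > 1) {
            List<String> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                String left = level.get(i);
                // If a level has an odd number of hashes, pair the last one with itself.
                String right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                next.add(sha256Hex(left + right));
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) throws Exception {
        List<String> txHashes = List.of(
                sha256Hex("tx1"), sha256Hex("tx2"), sha256Hex("tx3"), sha256Hex("tx4"));
        System.out.println("Merkle root: " + merkleRoot(txHashes));
        // Changing any single transaction changes the root, which is why the
        // root can be used to verify the integrity of the whole block.
    }
}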

Block Structure in BlockChain

In other words, the entire state of a Blockchain system can be validated by the hash of its last block which is of 256 bits.

What is a Digital Signature

A classical example of digital signatures is website traffic over HTTPS using SSL; SSL uses digital signatures to ensure the authenticity of the server.

A User generates a digital signature by generating a Public and Private key Pair.

Generated Key

A public key and a private key are mathematically related to each other. The private key should be kept secret and is used for signing messages digitally. The public key is intended to be distributed publicly and is used by the message recipient to validate the authenticity of the message.

Sending a Message with Digital Signature

The sender signs all his transactions with his private key. This ensures that only the owner of the account, who holds the private key, can perform the transaction.

Verifying a message with a Digital Signature

The receiver, or any node in the blockchain, verifies the transaction by checking its digital signature using the sender’s public key.
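
Here is a minimal Java sketch of the sign-and-verify flow using the JDK's KeyPairGenerator and Signature classes. Blockchains typically use elliptic-curve keys (Bitcoin uses the secp256k1 curve, which is not what the generic "EC" default below gives you), so treat this purely as an illustration of the flow, not of any particular network's scheme.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {

    public static void main(String[] args) throws Exception {
        // 1. The sender generates a public/private key pair.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
        generator.initialize(256);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] transaction = "Send 5 coins from A to B".getBytes(StandardCharsets.UTF_8);

        // 2. The sender signs the transaction with the private key.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(transaction);
        byte[] signature = signer.sign();

        // 3. Any node can verify the transaction with the sender's public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(transaction);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}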

Key Points to Remember about Hashing and Digital Signatures in Blockchain

  • Hashing is used for verifying the integrity of the transaction
  • The digital signature is used for verifying the identity of the performer of a transaction. 

For learning more about the Cryptography and Hashing in Blockchain, please visit the following links

https://blockgeeks.com/guides/cryptocurrencies-cryptography/

https://blockgeeks.com/guides/what-is-hashing/


What is a Block

We saw that a blockchain is a group of blocks in sequential order; let’s take a quick peek at what a block is. In simple terms, a block is a group of valid and verified transactions. Each block in the blockchain is immutable. Block miners continuously process new transactions, and new blocks are added to the end of the chain. Each block has the hash of the previous block, thus ensuring the integrity of the chain.

Block Structure in BlockChain

Every block in a blockchain will have the following information (a small illustrative sketch follows the list):

  • Hash of the previous Block
  • Timestamp at which the block was created
  • List of the transactions that were part of that Block
  • Merkle tree of all the transactions in that block
  • Nonce – (A random String generated by miner)
  • Hash of header of the block which will be used as the Hash of the previous block in the next block
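
The sketch below ties these fields together in a small, purely illustrative Java class. The field names and the way the header is serialized before hashing are simplifications chosen for readability, not the actual wire format of Bitcoin or any other blockchain.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BlockHeader {
    String previousBlockHash; // hash of the previous block's header
    long timestamp;           // time at which the block was created
    String merkleRoot;        // Merkle root of all transactions in the block
    long nonce;               // value the miner varies while mining (kept numeric here for simplicity)

    BlockHeader(String previousBlockHash, long timestamp, String merkleRoot, long nonce) {
        this.previousBlockHash = previousBlockHash;
        this.timestamp = timestamp;
        this.merkleRoot = merkleRoot;
        this.nonce = nonce;
    }

    // The hash of this header becomes the "previous block hash" of the next block.
    String hash() throws Exception {
        String serialized = previousBlockHash + timestamp + merkleRoot + nonce;
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] bytes = digest.digest(serialized.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) {
            hex.append(String.format("%02x", b & 0xFF));
        }
        return hex.toString();
    }
}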

Let’s take a look at a real example of a block. As you all know, Bitcoin is based on blockchain, so to explain a block I am referring to a block from a Bitcoin transaction. I took a real block from the Bitcoin blockchain network from www.blockchain.info.

You can see that this block has a header and a list of transactions. Transactions and their details can vary depending upon the blockchain implementation; since this is a Bitcoin block, you will see Bitcoin transactions.

Bitcoin Block Sample

Let’s go through some of the key fields in this block header.

Field name and summary:

  • Block Id: Unique ID representing this block
  • Number of Transactions: Total number of transactions recorded in that block
  • Height: Total number of blocks preceding this block on that blockchain (in this case, there are 505234 blocks created before this block)
  • Timestamp: Time at which this block was created
  • Relayed By: The miner who mined this block
  • Transactions: The hash of each transaction with its details

Genesis Block

We learned that every block has the hash of its previous block attached to it. It is implied, then, that the first block (Block 0) of a blockchain does not have a previous block; this block is known as the Genesis Block. The genesis block is usually hardcoded into the applications that use that blockchain.

Click here to see the Genesis Block of Bitcoin

Block time

Block time is the time taken to mine a block. Block time varies from implementation to implementation. To provide security and prevent forking, each implementation defines its own block time. For Bitcoin, the block time is about 10 minutes, whereas for Ethereum it is around 20 seconds.

What is in a transaction

Again, I am taking a Bitcoin transaction to explain a transaction record in blockchain. The transaction below shows the transfer of bitcoin from one address to two recipients and was part of the block that you saw in the previous image.
The tree diagram below shows the related transfers of those bitcoins. This related history, or audit trail of the asset (bitcoin in this example), ensures the legitimacy of the transaction.

Bitcoin Transaction

What is Block Mining

In simple words, miners are the ones who run a specialized version of the blockchain software that can add a block to the blockchain. Miners get rewarded each time they add a block to the blockchain. There may be multiple miners in a network competing with each other to create the next block.

Blockchain Network with Nodes and Miners

Miners keep a pool of unconfirmed transactions and try to wrap them into a block. They have to solve a mathematical puzzle to add the block to the blockchain; the puzzle is to produce a hash of the block with a particular property.

The constants in this puzzle are the following

  • Previous Block Hash
  • Time Stamp
  • Merkle root

The variable in this puzzle is the nonce. A nonce is a fixed-size string that can include both numbers and characters.

Miners keep trying new nonces until they solve the puzzle. The first miner to solve it wins and broadcasts the block to the network, and the block is added to the chain. The other nodes then add this block to their ledgers.

Currently, Bitcoin and Ethereum use an algorithm called proof-of-work to mine a block.

Below is the illustration of the proof-of-work algorithm. 

Proof of work - Bitcoin Implementation
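
For readers who want to see the nonce search itself, here is a toy Java sketch of the idea behind proof-of-work: keep the previous hash, timestamp, and Merkle root fixed, keep changing the nonce, and stop when the block hash starts with a required number of zeros. The string-based header and the four-zero difficulty are illustrative assumptions; real networks encode the target very differently.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ProofOfWorkDemo {

    static String sha256Hex(String input) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] bytes = digest.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) {
            hex.append(String.format("%02x", b & 0xFF));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Constants of the puzzle: previous block hash, timestamp, Merkle root.
        String header = "prevHash|1516000000|merkleRoot|";
        String target = "0000"; // toy difficulty: the hash must start with four zeros

        long nonce = 0;
        String blockHash;
        do {
            // Variable of the puzzle: the nonce.
            blockHash = sha256Hex(header + nonce);
            nonce++;
        } while (!blockHash.startsWith(target));

        System.out.println("Found nonce " + (nonce - 1) + " -> " + blockHash);
    }
}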

There are multiple algorithms used by blockchain networks for mining and consensus. We will discuss those algorithms in detail in a different post.

Few Usages of Blockchain

Blockchain is an emerging technology, and there are a ton of use cases that it can help solve. Here are a few use cases that can leverage the power of blockchain.

  • Health care records sharing
    Privacy of Personal health records is a major concern right now, think about a system where your personal health records can be stored in a blockchain and shared with the doctors within seconds.
  • Insurance Claim Processing
    We know that the insurance industry is prone to lots of fraudulent claims and fragmented sources of data. The chances of error (intentional or unintentional) are very high. With blockchain, we can have more transparent and error-free insurance claims.
  • Payments and Banking
    The major issues in the banking and payment sectors are fraud and money laundering. With the transparency blockchain provides, we should be able to eliminate most of these.
  • Voting systems
    We saw earlier that Blockchain is tamper proof or tamper evident. Implementing a blockchain based voting system can create an unhackable voting system.
  • Smart Contracts
    Smart contracts allow the self-execution of contracts. (I will cover smart contracts with examples in a different post). Blockchain not only eliminates the need for third-party for smart contract enforcement but also enforces the terms of a contract when the terms are met.

If you look at these use cases, blockchain will be a great choice if we can automatically validate the transactions using smart devices. In other words, to leverage the full potential of blockchain, we need more IoT sensors that can validate many of these “transactions”.

Food for thought

Imagine you go to a grocery store and pick up a bottle of organic milk. You are not sure whether this milk is really organic, so you take your smartphone and scan the QR code of the batch id on the bottle. The application lists the details of the dairy farm, the cattle feed they buy, and the health history of all the cows on the farm along with their medical records, so you can trace back each and every detail behind what the farm states about the milk. That is the future that blockchain can offer.

Implementations

If you look at the Gartner Hype Cycle for Emerging Technologies 2017, you can see that blockchain is slowly moving into the Trough of Disillusionment, and Gartner expects it to mature to mainstream adoption in 5 to 10 years. My feeling is this could be shorter.

Emerging Technology Hype Cycle for 2017_Infographic_R6A
Picture Courtesy – Gartner.com

Having said that, there is a ton of development going on in the blockchain field; both large and small players are bringing lots of innovation to blockchain technology.

Below are a few of the current major implementations of blockchain.

Bitcoin

I don’t think Bitcoin needs an introduction. It is the first digital cryptocurrency and leverages blockchain technology.

Ethereum

Ethereum is an open-source blockchain platform for blockchain applications and smart contracts. It is the first widely accepted platform of its kind. We will cover more on Ethereum in a different post.

We covered the basics of blockchain here; this is just the beginning. Stay tuned for more articles where we will go into more depth on some of the topics we discussed here.

Thanks for reading and Please leave your feedback in the comments section.

Exploring MongoDB Stitch… Backend as Service !!!


In my long software development career, I have always felt that the most time-consuming task is not building the actual business logic, but writing the code needed for basic housekeeping tasks, or in other words the “basic chores” of an application developer.

Initially, these chores included session management, memory management, thread management, etc. With the introduction of boilerplate code, frameworks, etc., these chores have been drastically reduced, letting development teams focus more on the core business functionality.

However, most teams still end up re-inventing the wheel by writing lots of these essential features over and over again, such as user authentication, sending notifications to customers, etc.

If the above is the story for enterprises, the story is a lot worse for startups. The problem of unwanted chores is a big predicament for them, as most of the time they are starting from scratch.

If you look at the Basic chores, they include the following.

  • Authentication
  • CRUD of Data
  • Fine-Grained data access
  • Integrations with other services

Now, looking at the overall cost of these chores, it is not just the development time and effort, but also the increased code complexity, testing effort, etc.

MongoDB Stitch

MongoDB Stitch is meant to address precisely these problems. At its last annual developer conference, MongoDB introduced MongoDB Stitch as an addition to MongoDB Atlas, their cloud-based “database-as-a-service.”

Though Atlas is available on most platforms, MongoDB Stitch is currently supported only in the AWS US East 1 region, and it is tailored as an add-on to an existing MongoDB Atlas subscription.

MongoDB Stitch allows you to create an application from your Atlas console and configure it to do the following:

  • Add new features to your existing application
  • Control access to data for each user
  • Integrate with other services

Once you set up your MongoDB Stitch application in the console, you can create a client application and start calling Stitch functionality from your application using the Stitch client.

Currently, Stitch clients are available for the following platforms

  • Browser and Node (JavaScript)
  • Android Application (java)
  • iOS Application (Swift)

MongoDB has done an excellent job of providing detailed documentation, and you can use the getting started guide to build sample applications.

Though MongoDB Stitch is still in beta, the features it offers look very promising. Let us explore a few of these features.

Collection/ Field-level permissions

MongoDB Stitch allows the developer to specify access rules for collections. These access rules can be defined either for the collection itself or for each individual field in the collection. Be aware, though, that these rules will be overridden by the access that you have provided at the MongoDB level.

Stitch Admin Console 2018-01-04 19-59-16

Service Integrations

This is my favorite part: integrating with other services is a breeze. As of January 2018, the following service integrations are supported by Stitch.

Service name and supported features:

  • S3: Upload a file to S3, generate a signed URL
  • Amazon SES: Send email
  • GitHub: Webhooks
  • HTTP Services: Basic HTTP calls (get, post, delete, put, patch, head)
  • GCM: Push notifications to Apple and Android devices
  • Twilio: Send and receive text messages

Each of these service calls can be configured to have its own rules

Stitch Admin Console Service Integration

Authentication Services

Stitch supports user authentication using the following:

  • E-mail and inline password
  • Google
  • Facebook
  • API Keys
  • Custom/Third party Authentication.

These integrations are super easy; I was able to create Google and Facebook login integrations for my sample app within 15 minutes.

Stitch Admin Console Authentication

Values (Constants)

These are named constants that you can use in Stitch functions and rules.

Stitch Admin Console Values

Functions

Functions in MongoDB Stitch are written in JavaScript and can be edited and tested using the built-in function editor.

As of now, ECMAScript 6 is not supported in functions.

Stitch Admin Console 2018-01-06 12-32-24


After playing around with MongoDB Stitch for a few days, I feel that it has a lot of potential; it can definitely improve productivity and help you focus on the core business logic.

Final Words

During my POC I wanted to extend my logged-in users with additional attributes; for example, I wanted to capture the address and phone numbers of my users. However, MongoDB Stitch saves users using a different mechanism that is not really extendable and is not visible as a collection.

Stitch Admin Console users

If MongoDB Stitch allowed the user information to be saved to a collection that could be extended with any additional attributes I wanted to add to those users, it would make life much easier for developers.

Here are some useful links on MongoDBStitch

Documentation: https://docs.mongodb.com/stitch/

Tutorials and Getting Started Guide: https://docs.mongodb.com/stitch/getting-started/

Let me know your thoughts …

Setting up Cloudwatch for Custom logs in AWS Elastic Beanstalk


Amazon CloudWatch monitoring services are very handy for gaining insight into your application metrics. Besides metrics and alarms, you can use CloudWatch to go through your application logs without logging into your server and tailing the logs.

I ran into a few issues when I was initially setting up CloudWatch for my custom logs in an Elastic Beanstalk Tomcat application. I will walk you through the whole process in this post.

Setting up your application

In this example, I am using a Spring boot Application which will be deployed in ElasticBeanstalk Tomcat container.

.ebextensions config file

First, you need to create an .ebextensions config file for your application.
Here is a working sample of the config file.


files:
  "/etc/awslogs/config/mycustom.conf":
    mode: "060606"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/mycustomlog.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/mycustomlog.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat8/mycustomlog.log*

The above configuration creates a custom config that copies logs from /var/log/tomcat8/mycustomlog.log into a log group named for my application, and it will pick up all log files matching the pattern mycustomlog.log*.

These lines create a configuration file, mycustom.conf, at /etc/awslogs/config/mycustom.conf. Once deployed, you can SSH into the instance and view your configuration at this location.


files:
  "/etc/awslogs/config/mycustom.conf":

The following lines create the log group and the configuration that copies the log files over to CloudWatch.


    content: |
      [/var/log/tomcat8/mycustomlog.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/mycustomlog.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat8/mycustomlog.log*

Make sure that your .ebextensions config is valid YAML before deploying it to your application environment. I use http://www.yamllint.com/ to check the validity of my YAML.

Place your .ebextensions config file in the /src/main/resources/ebextensions/ folder of your project.

Screenshot1

Gradle Script

Now you need to update your Gradle script to make sure that you package your .ebextensions config along with your war file.

Update your Gradle script to include the .ebextensions folder in the root of the war file.


war {
    from('src/main/resources/ebextensions') {
        into '.ebextensions'
    }
}

With this gradle script, your war file should have a .ebextensions folder in the root and should have the mycustom.conf file in it.

Now let’s prepare your Elastic Beanstalk to enable the cloudwatch

Prepping up your Elastic Beanstalk  Environment

To enable Cloudwatch for Elastic Beanstalk you need the following

  1. Permission for Elastic Beanstalk to create log group and log stream
  2. Enable the Cloudwatch on the Elastic Beanstalk application

Log in to your AWS account, go to IAM, and create a new policy similar to the following.

Grant Permission to Elastic Beanstalk

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchLogsAccess",
            "Action": [
                "logs:CreateExportTask",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeDestinations",
                "logs:DescribeExportTasks",
                "logs:DescribeLogGroups",
                "logs:FilterLogEvents",
                "logs:PutDestination",
                "logs:PutDestinationPolicy",
                "logs:PutLogEvents",
                "logs:PutMetricFilter"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:logs:*:*:log-group:*"
            ]
        }
    ]
}

Now attach this policy to “aws-elasticbeanstalk-ec2-role”

Enable CloudWatch Logs on your Elastic Beanstalk application

Go to your Elastic Beanstalk application and edit the Software Configuration in the Configuration menu.

Configuration 2017-12-10 16-21-55

Enable Cloudwatch Logs from the settings

Configuration 1 2017-12-10 16-21-55

Once you do this, AWS will re-configure the environment. Now deploy the war file created by the Gradle script.

Usually, AWS picks up the configuration after you deploy the new war file; if not, restart the environment.

Go to the cloudwatch to verify your log stream

Troubleshooting Tips

As I said before, I had issues while setting this up. If your configuration is not getting picked up, go through the following steps to troubleshoot the issue:

  • Make sure that your YAML is valid.
  • SSH into the Environment and make sure that the file created in the location /etc/awslogs/config/mycustom.conf is valid.
  • Check eb-publish-logs.log to see if it has any errors
  • Finally, if nothing works rebuild your environment.

Introducing Java Code Generator 1.0 A Utility to generate Java Beans from Documentum Objects


Java Code Generator generates Java classes from Documentum object types. Here are a few bullet points about what this utility does:

  • Generates Java Classes from the Documentum Object types
  • All non-inherited Attributes will be member variables of the Generated Java Class
  • Repeating attributes are generated as arrays of the corresponding Java type
  • Class name by default will be capitalized name of the underlying Documentum object type
  • Option to prefix and suffix class name
  • Option to specify the Package name
  • Supports DFS Annotation

After a couple of beta versions, I am finally glad to announce the Java Code Generator. Thanks a lot to all who tried it and sent me their valuable feedback. I have tried to incorporate most of the suggestions and fix many of the bugs in this version.

 I have added a new DFC version of this tool to the download page.

Click here to Go to Downloads page

Service Based Objects (SBO’s) in Documentum


The Documentum Business Object Framework (BOF), which was introduced in Documentum 5.3, plays a key role in most current Documentum implementations. The Service-based Object is one of the basic members of the Documentum BOF family. Let’s see what makes Service Based Objects so popular and how you can implement one.

What is an SBO

In simple terms, an SBO in Documentum can be compared to a session bean in a J2EE environment. SBOs enable developers to concentrate just on the business logic, while all the other aspects are managed for them by the server. This reduces the application code significantly and removes a lot of complexity. The most significant advantage of a BOF module is that it is deployed in a central repository. The repository maintains the module, and DFC ensures that the latest version of the code is delivered to the client automatically.

Service-Based Objects are repository and object type independent, which means the same SBO can be used by multiple Documentum repositories and can retrieve and operate on different object types. SBOs can also access external resources, for example a mail server or an LDAP server. Before the introduction of Documentum Foundation Services, SBOs were commonly used to expose Documentum web services.

An SBO can call another SBO, and it can in turn be called by any Type Based Object. (Type Based Objects (TBOs) are a different kind of Business Object type, which I will explain in a separate study note.)

A very simple example of an SBO implementation would be a zip code validator. Multiple object types across multiple repositories might have a zip code attribute. If this functionality is exposed as an SBO, it can be used by custom applications irrespective of object type and repository. This validator SBO can even be used by different TBOs for validation.

Here are some bullet points about SBO’s for easy remembering

  • SBO’s are part of Documentum Business Object framework
  • SBO’s are not associated with any repositories
  • SBO’s are not associated with any Documentum object types.
  • SBO information is stored in repositories designated as Global Registry.
  • SBO’s are stored in /System/Modules/SBO/<sbo_name> folder of repository. <sbo_name> is the name of SBO.
  • Each folder in /System/Modules/SBO/ corresponds to an individual SBO

How to implement an SBO using Composer

The steps to create an SBO are these.

1) Create an interface that extends IDfService and defines your business method
2) Create the implementation class and write your business logic. This class should extend DfService and implement the interface defined in Step 1
3) Create a jar file for the interface and another jar for the implementation class, then create the Jar Definitions
4) Create an SBO module and deploy your Documentum archive using Documentum Composer (Application Builder for older versions)

Let’s walk through these steps with an example SBO, a zip code validator. I am not covering the steps using Application Builder here; the screenshots and notes below show how to use Documentum Composer to implement a Service Based Object in Documentum version 6 or above.

Step 1: Create an interface and define your Business method

The first step is to create an interface which will define the business functionality. This interface should extend IDfService interface. The client application will use this interface to instantiate the SBO.

Click New –> Interface in Documentum Composer. Click on the Add button of Extended Interfaces and search for IDfService. Select IDfService and click OK


Now add the business method validateZipCode() to the interface. The code should look like the following.

package com.ajithp.studynotes.sbo;

import com.documentum.fc.client.IDfService;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public interface IZipValidatorSBO extends IDfService {

public void validateZipCode (IDfSysObject obj, String zipCode, String repository)throws DfException;
}
Step 2: Create the implementation class

All Service Based Object implementation classes should extend the DfService class and implement the interface created in the first step. DfService is an abstract class; a few of its methods that were abstract in 5.3 have been given default implementations in 6.0 and later.

Method name, return type, and notes:

  • getVendorString() (String): The default implementation returns an empty string; override it to change this.
  • getVersion() (String): The default implementation returns a version that is probably not what you want; override this method to return your own major.minor version.
  • isCompatible() (boolean): The default implementation returns true if the version is an exact match.

Let’s see some other important methods of DfService Class before we move further.

Method name, return type, and notes:

  • getName() (String): Returns the fully qualified logical name of the service interface.
  • getSession() (IDfSession): Returns an IDfSession object for the docbase name passed as an argument. Make sure that you call releaseSession() once you are done with the operation that uses the session.
  • releaseSession(): Releases the handle to the session reference passed to this method.
  • getSessionManager() (IDfSessionManager): Returns the session manager.

Managing repository sessions in an SBO

As we saw in the previous table, it is always good practice to release the repository session as soon as you are done with it. The ideal pattern looks like this.

// Get the session
IDfSession session = getSession(repoName);
try {
    // do the operation with the session
} catch (Exception e) {
    // process the exception
} finally {
    // release the session
    releaseSession(session);
}

Transactions in SBO

Another important thing to know is how to handle transactions in an SBO. Note that only session manager transactions can be used in an SBO; the system will throw an exception if a session-based transaction is used within an SBO.

beginTransaction() starts a new transaction; use commitTransaction() to commit it or abortTransaction() to abort it. Always ensure that you are not beginning a transaction while another transaction is active. You can use isTransactionActive() to find out whether a transaction is active or not.

Another important point: if your SBO did not start the transaction, do not commit or abort it in the SBO code; instead, if you want the transaction to be rolled back, use the setTransactionRollbackOnly() method.
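
To illustrate the points above, here is a hedged sketch of the session-manager transaction pattern inside an SBO method. It only uses the DFC calls mentioned in this note (beginTransaction(), commitTransaction(), abortTransaction() and isTransactionActive() on the session manager, plus setTransactionRollbackOnly() for the case where the caller owns the transaction); the class name, the method and the attribute being updated are assumptions for illustration only.

import com.documentum.fc.client.DfService;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

// Illustrative only: a DfService subclass showing the transaction pattern.
public abstract class TransactionAwareService extends DfService {

    public void updateZipInTransaction(IDfSysObject obj, String zipCode, String repository)
            throws DfException {
        IDfSessionManager sessionManager = getSessionManager();
        // Only start a transaction if the caller has not already started one.
        boolean ownTransaction = !sessionManager.isTransactionActive();
        if (ownTransaction) {
            sessionManager.beginTransaction();
        }
        IDfSession session = getSession(repository);
        try {
            obj.setString("zipcode", zipCode);
            obj.save();
            if (ownTransaction) {
                sessionManager.commitTransaction();
            }
        } catch (DfException e) {
            if (ownTransaction) {
                sessionManager.abortTransaction();
            } else {
                // We did not start the transaction, so only flag it for rollback.
                sessionManager.setTransactionRollbackOnly();
            }
            throw e;
        } finally {
            releaseSession(session);
        }
    }
}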

Other important points

1) Since SBO’s are repository independent, do not hardcode the repository names in the methods. Either pass the repository name as a method parameter or have it as a variable in SBO and use a setter method to populate it after instantiating

2) Always try to make SBOs stateless (it’s a pain to manage stateful SBOs).

3) Don’t reuse an SBO instance; always create a new instance before an operation.

Now let’s see how to code our ZipValidator SBO.

Click on New –> Class. Click on the Browse button next to Superclass and search for and select DfService; in the Interfaces section, search for the interface created in the previous step and click OK. Also, select the option “Inherited abstract methods” under “Which method stubs would you like to create?”.


I have overridden the getVersion() method for illustration purposes. See the code sample for the inline comments.

package com.ajithp.studynotes.sbo.impl;

import com.ajithp.studynotes.sbo.IZipValidatorSBO;
import com.documentum.fc.client.DfService;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class ZipValidator extends DfService implements IZipValidatorSBO {

public static final String versionString = "1.0";
// overriding the default 
public String getVersion() {
        return versionString ;
      }

public void validateZipCode (IDfSysObject obj, String zipCode, String repository) throws DfException {
     IDfSession session = getSession(repository);
     try {
     if (isValidUSZipcode(zipCode)){
         obj.setString("zipcode",zipCode);
         obj.save();
      }
     } catch (Exception e){
         /* Assuming that transaction is handled outside the code and this says DFC to abort the transaction 
         in case of any error */
        getSessionManager().setTransactionRollbackOnly();
        throw new DfException();
     } finally {
     releaseSession(session);
    }
  }
 private boolean isValidUSZipcode(String zipCode){
     // implement your logic to validate zipcode. 
     // or even call a external webservice to do that 
     // returning true for all zip codes
      return true;
   }
}
Step 3: Generate Jar files and Create Jar Definitions

The next step in SBO creation is to create Jar files which will hold the interface and the implementation classes. These jar files are required to deploy your SBO.

Use Composer’s (Eclipse’s) Export JAR option or the command-line jar command to create the jar files.



Selecting the sbo package to create the interface jar


Selecting the com.ajithp.studynotes.sbo.impl for implementation.

Look at Composer’s Export JAR screenshots for the interface and the implementation (refer to the Eclipse documentation for more details); the figures above are self-explanatory.

The command line to create a jar file is jar cf <name_of_jar> <input_files>; please look at the Java documentation for more details on the switches and options of the jar command.

The creation of Jar Definitions is a new step added in Composer.

1) In Composer, change the perspective to Documentum Artifacts, then click New –> Other –> Documentum Artifacts –> Jar Definition


2) Click Next, enter a name for the Jar Definition, and click Finish

3) Select the Type as Interface if the jar contains only the interface, Implementation if the jar contains only the implementation of the interface, or Interface and Implementation if a single jar file contains both. Click on the Browse button and browse to the jar created in the last step.

In our case, create two Jar Definitions: the first one with type Interface pointing to the jar created for the SBO interface, and a second one with type Implementation pointing to the implementation jar.


Name the interface jar definition zipcodevalidator and the implementation jar definition zipcodevalidatorimpl.

Step 4: Create a Module and Deploy the SBO

In Composer, change the perspective to Documentum Artifacts, then click New –> Other –> Documentum Artifacts –> Module


Give a valid name, leave the default folder, and click Finish

In the Module edit window, select SBO from the dropdown


Now click on the Add section of Implementation Jars under Core Jars. A new pop-up window will appear listing all the Jar Definitions of type Implementation or Interface and Implementation. Select the one you want to use for the ZipCodeValidator SBO, that is, ZipCodeValidatorImpl.


Click on the Select button next to Class name and select the implementation class, in this case ZipValidator.


Now click on the Add section of Interface Jars under Core Jars. A new pop-up window will appear listing all the Jar Definitions of type Interface or Interface and Implementation. Select the one you want to use for the ZipCodeValidator SBO, that is, ZipCodeValidator.


For more details of other options refer to Documentum Composer Manual. Save the Module.

Now right click on the project and install the Documentum project


Click on the Login button; after logging in, click on Finish to start the installation.


 

Look at the Documentum composer documentation to know more about the Installation options.

How to use SBO from a Client Application

Follow the steps below to instantiate an SBO from a client application.

1) Get the Local client

2) Create login info and populate the login credentials.

3) Create an IDfSessionManager object

4) Use newService() on the client object to create an SBO instance

// create client
  IDfClient myClient = DfClient.getLocalClient();
  // create login info
  IDfLoginInfo myLoginInfo = new DfLoginInfo();
  myLoginInfo.setUser("user");
  myLoginInfo.setPassword("pwd");
  // create session manager
  IDfSessionManager mySessionManager = myClient.newSessionManager();
  mySessionManager.setIdentity("repositoryName", myLoginInfo);
  // instantiate the SBO
  IZipValidatorSBO zipValidator = (IZipValidatorSBO) myClient.newService( IZipValidatorSBO.class.getName(), mySessionManager);
  // call the SBO service
  zipValidator.validateZipCode(obj, zipCode, "repositoryName");

Download this Study Note (PDF)

Using Java reflection to reduce Code and Development time in DFS


 

Java reflection is one of the most powerful APIs of the Java language, and it can be used to reduce code significantly.

Most current enterprise applications consist of different layers, and they use value objects to transfer data from one layer to another. Hand-coding calls to the getters and setters of every value-object attribute can increase the code size and development time of an application. Effective use of reflection can reduce code and development time significantly.

So let’s take a scenario: I have an object type MyObjectType extending dm_document with 50 additional attributes. dm_document as of Documentum 6.5 has 86 attributes, so adding the 50 extra attributes gives us 136 attributes for this object type. Consider a standard web application using DFS behind the scenes which needs to manipulate (add or edit) instances of this object type. The service needs to add all these attributes to the PropertySet of the DataObject representing that instance and then call the appropriate service.

 

Assuming that the bean instance for MyObjectType is named myObjectBean, the standard code will be something like this:

  ObjectIdentity objIdentity = new ObjectIdentity("myRepository");
  DataObject dataObject = new DataObject(objIdentity, "dm_document");
  PropertySet properties = dataObject.getProperties();
  properties.set("object_name", myObjectBean.getObject_Name());
  properties.set("title", myObjectBean.getTitle()); 
  // omited for simplicity


  objectService.create(new DataPackage(dataObject), operationOptions);

 

In the above code you have to explicitly set each individual attribute of the object; the more attributes there are, the more complex and messy the code becomes.

Take another example, where you have to retrieve an object’s information and pass it over to the UI layer.

 myObjectBean.setObject_name(properties.get("object_name").getValueAsString());
 myObjectBean.setTitle(properties.get("title").getValueAsString());
 myObjectBean.setMy_Custom_Property(properties.get("my_custom_property").getValueAsString());

This operation becomes even more complex if you decide to match the data types of your bean with those of the object type.

 

So what is the best approach to reducing this complexity? The answer is effective use of the reflection API.

Let’s take a step-by-step approach to handle this issue.

To understand this better, consider the following as the attributes of mycustomobjecttype:

 

Attribute name and type:

  • first_name: String
  • last_name: String
  • age: integer
  • date_purchased: time
  • amount_due: double
  • local_buyer: boolean

 

Java Bean

Create a Java Bean that matches the Object Type

import java.util.Date;

public class Mycustomobjecttype {
  protected String first_name ;
  protected String last_name  ;
  protected int age;
  protected Date date_purchased  ;
  protected double amount_due  ;
  protected boolean local_buyer ;
  public int getAge() {
    return age;
  }
  public void setAge(int age) {
    this.age = age;
  }
  public double getAmount_due() {
    return amount_due;
  }
  public void setAmount_due(double amount_due) {
    this.amount_due = amount_due;
  }
  public Date getDate_purchased() {
    return date_purchased;
  }
  public void setDate_purchased(Date date_purchased) {
    this.date_purchased = date_purchased;
  }
  public String getFirst_name() {
    return first_name;
  }
  public void setFirst_name(String first_name) {
    this.first_name = first_name;
  }
  public String getLast_name() {
    return last_name;
  }
  public void setLast_name(String last_name) {
    this.last_name = last_name;
  }
  public boolean isLocal_buyer() {
    return local_buyer;
  }
  public void setLocal_buyer(boolean local_buyer) {
    this.local_buyer = local_buyer;
  }
}

Getting the Values from PropertySet (Loading Java Bean)

……

List<DataObject> dataObjectList = dataPackage.getDataObjects();
DataObject dObject = dataObjectList.get(0);
Mycustomobjecttype myCustomObject = new Mycustomobjecttype();
populateBeanFromPropertySet(dObject.getProperties(),myCustomObject);

……

// See the Reflection in Action here 
public void populateBeanFromPropertySet(PropertySet propertySet, Object bean)
  throws Exception {
 BeanInfo beaninformation;
 beaninformation = Introspector.getBeanInfo(bean.getClass());
 PropertyDescriptor[] sourceDescriptors = beaninformation.getPropertyDescriptors();
 for (PropertyDescriptor descriptor : sourceDescriptors) {
     Object result = null;
     String name = descriptor.getName();
    if (!name.equals("class")) {
      if (propertySet.get(name) != null) {
        if (descriptor.getPropertyType().getName().equals("int")) {
          result = new Integer(propertySet.get(name)
              .getValueAsString());
        } else if (descriptor.getPropertyType().getName().equals("double")) {
          result = new Double(propertySet.get(name).getValueAsString());
         } else if (descriptor.getPropertyType().getName().equals("boolean")) {
          result = new Boolean(propertySet.get(name).getValueAsString());
         } else if (descriptor.getPropertyType().getName().equals("java.util.Date")) {
          DateProperty dat = (DateProperty)propertySet.get(name);
          result = dat.getValue();
        }else {
          // none of the other possible types, so assume it as String
          result = propertySet.get(name).getValueAsString();
        }
        if (result != null)
          descriptor.getWriteMethod().invoke(bean, result);
      }
     }
  }
}

Setting Values to Property Set

 

public DataPackage createContentLessObject(Mycustomobjecttype myCustomType) throws Exception {
ObjectIdentity objectIdentity = new ObjectIdentity("testRepositoryName");
DataObject dataObject = new DataObject(objectIdentity, myCustomType.getClass().getName());
PropertySet properties = populateProperties(myCustomType);
properties.set("object_name",myCustomType.getFirst_name()+myCustomType.getLast_name() );
dataObject.setProperties(properties);
DataPackage dataPackage = new DataPackage(dataObject);
OperationOptions operationOptions = new OperationOptions();
return objectService.create(dataPackage, operationOptions);
}

 

// Reflection in Action  
public PropertySet populateProperties(Object bean)throws Exception {
BeanInfo beaninfo;
PropertySet myPropertyset = new PropertySet();
beaninfo = Introspector.getBeanInfo(bean.getClass());  
PropertyDescriptor[] sourceDescriptors = beaninfo
      .getPropertyDescriptors();
  for (PropertyDescriptor descriptor : sourceDescriptors) {
    String propertyName = descriptor.getName();
    if (!propertyName.equals("class")) {
        // dont set read only attributes if any
       // example r_object_id 
       if (!propertyName.startsWith("r")) {
        Object value = descriptor.getReadMethod().invoke(bean);
       if (value != null) {
          myPropertyset.set(propertyName, value);
        }
      }
   }
 }
  return myPropertyset;
}

Chaining of Custom Services in DFS


 

There is an interesting drawback in Documentum Foundation Services version 6.5.

Issue:

When you chain custom services and try to build them, the build fails. Let’s look at a scenario based on the DFS sample code itself.

@DfsPojoService(targetNamespace = "http://common.samples.services.emc.com", requiresAuthentication = true)
public class HelloWorldService {

  public String sayHello(String name) {

    ServiceFactory serviceFactory = ServiceFactory.getInstance();
    IServiceContext context = ContextFactory.getInstance().getContext();

    try {
      IAcmeCustomService secondService = serviceFactory.getService(IAcmeCustomService.class, context);
      secondService.testExceptionHandling();
    } catch (ServiceInvocationException e) {
      e.printStackTrace();
    } catch (CustomException e) {
      e.printStackTrace();
    } catch (ServiceException e) {
      e.printStackTrace();
    }

    return "Hello " + name;
  }
}

Here, in the DFS sample code, I am chaining the services. Everything looks fine, but when you build this service, the generateArtifacts ant task fails with a ClassNotFound compiler error at

IAcmeCustomService secondService = serviceFactory.getService(IAcmeCustomService.class, context);

What happens here is that when the build does its initial clean-up, all the generated client interfaces are deleted, and DFS currently does not check for any dependencies.

Let me take the example of dfs-build.xml, which is part of the CoreDocumentumProject in Composer.

<generateArtifacts serviceModel="${gen.src.dir}/${context.root}-${module.name}-service-model.xml" destdir="${gen.src.dir}/">
  <src location="${src.dir}" />
  <classpath>
    <path refid="projectclasspath.path" />
  </classpath>
</generateArtifacts>
</target>

 

Within this, we cannot set any exclusion path on <src location="${src.dir}" />.

Even if you provide a <fileset/> or <dirset/> with a pattern set, it is not recognized.

I raised a support case with EMC, and they told me that this is not currently supported! They said they will add it as a feature request.

This means we cannot chain custom services unless EMC fixes this or we use a semi-manual workaround to overcome the issue.

The Work-around that I found

Follow these steps to overcome this issue

Step 1

Identify the services that will call the custom services and create a new source directory for them in Composer; here I am calling it depended_src. Move the services that call the custom services there. The depended_src folder should be in a separate path from the web services src.


Step 2

1) Now Edit the Build file and add these two properties

 

<property name="my.core.services.classes" value="${service.projectdir}/Web Services/bin/classes" />

<property name="dep.src.dir" value="${service.projectdir}/depended_src" />

The dep.src.dir should point to the depended src location mentioned in step 1

2) Create an additional target for generateModel and generateArtifacts

<target name="generateDependencies" depends="generate">
  <echo message="Calling generateDependencies" />

  <generateModel contextRoot="${context.root}" moduleName="${module.name}" destdir="${gen.src.dir}/">
    <services>
      <fileset dir="${dep.src.dir}">
        <include name="**/*.java" />
      </fileset>
    </services>
    <classpath>
      <pathelement location="${my.core.services.classes}" />
      <path refid="projectclasspath.path" />
    </classpath>
  </generateModel>

  <generateArtifacts serviceModel="${gen.src.dir}/${context.root}-${module.name}-service-model.xml" destdir="${gen.src.dir}/">
    <src location="${dep.src.dir}" />
    <classpath>
      <pathelement location="${my.core.services.classes}" />
      <path refid="projectclasspath.path" />
    </classpath>
  </generateArtifacts>

  <!-- signal build is done -->
  <!-- used by DFSBuilder.java -->
  <copy todir="${src.dir}/../" file="${basedir}/dfs-builddone.flag" />
</target>

3) Now edit dfs-build.properties and add the following property

service.projectdir= <absolute path to the project>

Step 3

1) Run the generate task,

2) Copy all the service entries (everything between <module> and </module>) from <context-root>-<module-name>-service-model.xml; you can find this file in the <project_dir>\Web Services\bin\gen-src folder

3) Now run the generateDependencies task that was created on Step 2

4) Now Edit <context-root>-<module-name>-service-model.xml and add the copied services to this file

5) If you want to create the jar files now you can call the package task after this.

This should help you chain custom services. If you have found any alternate ways, please comment.