Unlocking the Future: The Dawn of Artificial General Intelligence?


Imagine a world where machines not only understand our words, but grasp the nuances of our emotions, anticipate our needs, and even surpass our own intelligence. This is the dream, and perhaps the near reality, of Artificial General Intelligence (AGI).

For many years, the idea of achieving AGI existed only in the realm of science fiction, a futuristic vision in which machines seamlessly integrate into our lives. However, this perception is changing. Advances in AI technology are blurring the line between fiction and reality, leading to both excitement and apprehension about its potential impact on society.

In this blog post, we’ll embark on a journey to explore the fascinating world of AGI. We’ll peek into the current state of AI and the significant innovations that are inching us toward AGI.

What is AGI and Why is it Significant? 

AGI is a type of artificial intelligence that enables machines to understand, learn, and apply their intelligence to solve problems with the same efficiency and effectiveness as a human being. Unlike narrow AI, which is designed to perform specific tasks with expertise (such as facial recognition, playing a game, or language translation), AGI can generalize its learning and reasoning abilities across a wide range of tasks without being pre-programmed with task-specific algorithms.

The goal of AGI is to create machines that can reason, plan, learn, communicate, and make decisions at the same level as humans. AGI has the potential to be a universal problem solver, leading to breakthroughs in fields such as medicine, climate change, space exploration, and more, where complex problem-solving capabilities are crucial.

AGI can learn from experiences and adapt to new situations without human intervention. This adaptability makes it an invaluable tool for navigating the ever-changing and complex nature of real-world environments.

AGI will work alongside humans, complementing human intelligence and capabilities in unique ways. It may enhance human decision-making, provide personalized education, and offer expert advice across disciplines, enabling a new era of human-AI collaboration.

The capabilities of AGI described above imply that it should understand, learn, and apply knowledge across a wide range of tasks with a level of competence comparable to, or surpassing, that of a human. This encompasses not just narrow tasks, but the full breadth of cognitive tasks humans can perform.

Current State of AI Technologies

We have made significant progress in AI over the past few years, and current AI systems have several notable strengths.

Strengths:

1. Specialization and Efficiency in Narrow Tasks: AI systems are excellent at performing specific tasks that are well-defined. For instance, deep learning has shown outstanding success in tasks such as image and speech recognition, natural language processing (NLP), and playing complex games like Go and chess. In some cases, these systems can even outperform humans in their areas of expertise.

2. Scalability and Data Processing: Current AI systems can process and analyze massive amounts of data at an incredibly fast pace and on a much larger scale than humans can ever achieve. This makes them particularly useful in fields such as financial forecasting, data analysis, and medical diagnosis, where there is a need to process large volumes of data quickly.

3. Continuous Learning and Adaptation: Many AI systems, especially those based on machine learning, can continuously learn from new data and improve over time. This allows them to adapt to changing environments and requirements, albeit within their narrow domain of expertise.

However, to achieve true AGI, we need to overcome many of the limitations we currently face. 

Limitations:

1. Lack of Generalization: While the majority of current AI systems are highly skilled at performing tasks for which they have been trained, they struggle when it comes to applying the knowledge gained from these tasks to new and unseen tasks. This inability to generalize their knowledge is a major hurdle in achieving human-like intelligence, as it requires the ability to apply knowledge flexibly across a wide range of domains.

2. Understanding and Reasoning: Although AI has advanced significantly, it still lacks the profound understanding and reasoning capabilities that humans possess. While AI can recognize patterns in data, it often fails to comprehend the underlying causality or context, which restricts its ability to make intricate decisions or understand the complicated nuances of human languages and emotions.

3. Ethical and Social Considerations: As AI systems become more integrated into society, issues around ethics, bias, and social impact arise. Ensuring that AI systems are fair, transparent, and aligned with human values is a complex challenge that needs to be addressed.

The Pathway to AGI: Integrating AI Models and Technologies

Achieving AGI will not be possible through a single do-it-all model. Instead, it will involve integrating various AI models and technologies, leveraging their strengths while overcoming their limitations. This integration can take several forms:

  • Hybrid Models: Creating hybrid models by combining different AI approaches, such as symbolic AI (which excels at reasoning and understanding complex relationships) with neural networks (which are excellent at pattern recognition), could lead to systems that both understand and learn from the world more holistically.
  • Transfer and Multitask Learning: Developing AI architectures capable of transferring knowledge between domains and performing multiple tasks with a single model is a step towards the adaptability and flexibility characteristic of AGI (a minimal transfer-learning sketch follows this list).
  • Enhancing Learning Efficiency: To achieve AGI, AI systems must learn from fewer examples and generalize across domains, similar to how humans can learn new concepts with limited data. Research into Self Discovering models, few-shot learning, and meta-learning is critical for this.
  • Ethical and Social Alignment: Integrating ethical reasoning and social norms into AI systems is crucial for their safe and beneficial coexistence with humans. This involves not just technical advancements but also interdisciplinary research incorporating insights from philosophy, psychology, and social sciences.
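To make the transfer-learning idea above concrete, here is a minimal sketch in PyTorch (assuming torch and torchvision are installed): a network pre-trained on ImageNet is reused, its general-purpose features are frozen, and only a small new head is trained for a hypothetical 10-class target task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet and freeze its general-purpose features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one for the new (hypothetical) 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on the new task; only the new head is updated."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point is not the specific model, but that knowledge learned on one task (generic visual features) is carried over to a new task with very little additional training data.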

Building Blocks of AGI

1: The Foundation of AI Models

AGI relies on robust and powerful AI models to solve complex and multifaceted problems. In this section, we will explore some of the recent advancements in these models and how they are helping to achieve true AGI.

  • Mixture of Experts Architecture:  Mixture of Experts (MoE) is a neural network architecture that is composed of numerous specialized sub-networks, called ‘experts,’ each designed to handle specific types of data or tasks. In an MoE model, input is routed to only a few relevant experts. This allows for conditional computation, where parts of the network are activated based on the input, leading to a dramatic increase in model capacity without a proportional increase in computation.
    Most high-performing modern models, such as GPT-4, Mistral's Mixtral, and Gemini 1.5, are reported to leverage a Mixture of Experts architecture (a toy routing layer is sketched after this list).
  • Multimodal Large Language Models: Multimodal language models can process and integrate information from various types of data, including text, images, and audio, similar to how humans perceive and interpret the world through multiple sensory inputs. AGI should possess the ability to understand, generate, and interpret human language just as humans do.
    GPT-4 and Gemini are examples of multimodal large language models.
  • Larger Context Windows: A context window is a term used in natural language processing and machine learning to refer to the amount of textual or input data that an AI model can consider at any given time to make predictions, generate responses, or understand content. The AI’s ability to understand subtle nuances and maintain coherence over extended conversations or texts can be significantly enhanced by expanding the context window. This can improve the AI’s reasoning and decision-making capabilities by allowing it to simultaneously consider a broader range of information, leading to more informed and nuanced outcomes. The expansion of the context window facilitates deeper learning and knowledge integration, which enables the AI to detect patterns and relationships over larger spans of information. Furthermore, it broadens the applicability of AI in complex fields such as legal analysis, scientific research, and literary interpretation, where extensive background information is required to understand the content. 
    The LTM-1 model has a context window of 5 million tokens (approximately 4,000 pages), and Gemini 1.5 has a context window of 1 million tokens (approximately 800 pages).
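To illustrate the Mixture of Experts idea from the list above, here is a toy routing layer in PyTorch. It is a simplified sketch, not the architecture of any particular model: the layer sizes, the number of experts, and the top-2 routing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """A toy Mixture-of-Experts layer with top-k routing."""
    def __init__(self, dim: int = 128, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)   # the router
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, dim)
        scores = self.gate(x)                               # (batch, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)      # pick the best experts per input
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):                          # only the chosen experts run
            for slot in range(self.top_k):
                expert = self.experts[int(idx[b, slot])]
                out[b] += weights[b, slot] * expert(x[b])
        return out

layer = TinyMoE()
print(layer(torch.randn(4, 128)).shape)   # torch.Size([4, 128])
```

The key property is conditional computation: each input only activates a couple of experts, so total capacity can grow much faster than the compute spent per input.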

2: Autonomous AI Agents

AGI can mimic human-like cognitive processes. One of its key features is the ability to operate independently and make decisions in complex environments. Autonomous agents, powered by large language models, can adapt and solve various problems without human intervention: they can understand a task, break it down into smaller sub-tasks, and execute them accordingly, as the sketch below illustrates.
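A heavily simplified sketch of that plan-then-execute loop is shown below. The call_llm helper is a placeholder assumption for whichever model backend is available; real agent frameworks add tool use, memory, and error handling on top of this idea.

```python
def call_llm(prompt: str) -> str:
    """Placeholder assumption: swap in a real LLM call (OpenAI, Gemini, a local model, ...)."""
    raise NotImplementedError

def run_agent(goal: str) -> list[str]:
    # 1. Ask the model to decompose the goal into sub-tasks.
    plan = call_llm(f"Break the goal '{goal}' into a short numbered list of sub-tasks.")
    results = []
    # 2. Work through the sub-tasks one by one, feeding back the overall goal.
    for step in [line for line in plan.splitlines() if line.strip()]:
        results.append(call_llm(
            f"Goal: {goal}\nSub-task: {step}\nCarry out this sub-task and report the result."))
    return results
```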

  • OpenAI’s Next Iteration of ChatGPT will be a Super Smart Personal Assistant: This agent is designed to take over people’s computers, performing various tasks autonomously. Sam Altman has reportedly described the new version of ChatGPT as a significant advancement towards creating a super-smart personal assistant for work, capable of operating systems and executing tasks based on user commands.
  • Google’s Work on AI Agents: Sundar Pichai, Google’s CEO, stated that their latest technology allows it to act more like an agent over time, indicating that Google is also focusing on developing autonomous AI agents.
  • Other Notable Autonomous Agents: The technology industry is moving towards creating AI agents capable of performing tasks with high levels of autonomy. This can be seen in innovations such as Rabbit R1 devices, Mulon, Open Interpreter, and self-operating computers.
  • OpenAI Sora: Sora, a recent model introduced by OpenAI, can generate high-resolution videos from textual prompts. Though it is not technically an autonomous AI agent, it showcases the ability of currently available models to perform complex tasks with minimal human interference.

Interactions and Decision Making

3: Enhancing Communication with AI

Another aspect of AGI is conversing with humans. This conversation is crucial for feeding human communication into AI models, allowing them to process, understand, and interact with human language in its natural form. In the other direction, AGI needs to communicate back to humans as naturally as possible.

  • From Voice to Text: The importance of voice-to-text technology in achieving AGI lies in its ability to give AI a direct connection to human speech and thought, providing a vast dataset to learn the subtleties of language, context, emotion, and intention. As AI models become more proficient at interpreting voice inputs, they come closer to achieving a level of linguistic comprehension and interaction that resembles human cognitive abilities.
    OpenAI's Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual data; it can transcribe audio with different accents and background noise, and it works in multiple languages (a minimal usage sketch follows this list).
  • From Text to Voice: Advancements in text-to-voice technologies that offer human-like interactions have been driven by the integration of advanced algorithms, machine learning, and artificial intelligence (AI). These technologies have significantly enhanced the capacity of text-to-speech (TTS) systems to recognize and replicate the nuances of human speech, including intonation, stress, rhythm, and emotional inflections.
    ElevenLabs is a company that specializes in advanced text-to-speech (TTS) and AI voice generation technology. Their platform provides high-quality and natural-sounding speech synthesis with a wide range of customization options. ElevenLabs’ API supports voice generation in 29 languages and offers ultra-low latency.
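As a small example of the voice-to-text side, the open-source openai-whisper package can be used as follows; the model size and the audio file name are illustrative assumptions.

```python
import whisper  # pip install openai-whisper

model = whisper.load_model("base")                  # small multilingual model
result = model.transcribe("meeting_recording.mp3")  # file name is a placeholder
print(result["text"])                               # the recognized transcript
```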

4: AI’s Decision-Making Capabilities

AGI requires not only the execution of tasks but also the ability to understand and adapt to complex and dynamic environments, make decisions that consider long-term outcomes and ethical implications, and integrate diverse knowledge bases.  Several recent AI models and systems have demonstrated remarkable abilities in complex decision-making and execution. 

AlphaGo and AlphaZero: DeepMind has developed AI systems that have shown remarkable decision-making abilities in complex games like Go and chess, which are known for their vast number of potential moves. AlphaGo's victory over world champion Lee Sedol and AlphaZero's ability to master Go and chess entirely through self-play have highlighted AI's potential to learn strategies and anticipate opponents' moves.

Autonomous Vehicles: Self-driving cars are another prime example of how AI can make decisions in real-world environments. They use data from sensors and cameras to make quick decisions regarding speed, direction, and obstacle avoidance while adapting to changes in traffic and following traffic laws. This kind of decision-making involves complex algorithms that can predict the actions of other drivers and pedestrians, demonstrating a highly advanced integration of perception, prediction, and execution.

Enabling Technologies

5: Specialized AI Hardware

The Role of AI Chips and Hardware in Developing AGI

The development of AGI is not just a software challenge but also a hardware one. Specialized AI hardware, including AI chips, plays a crucial role in this journey. These chips are designed specifically to handle the enormous computational demands of AI algorithms, providing the necessary speed and efficiency that traditional hardware cannot. Currently, the focus of AI hardware development is on optimizing neural network performance, reducing energy consumption, and increasing processing capabilities. Achieving human-like cognitive abilities requires processing and analyzing data at an unprecedented scale and speed, which is where specialized AI chips come in. They enable more complex models to be trained more quickly and efficiently, facilitating advancements in learning algorithms and neural network designs that are essential for the leap from narrow AI to AGI.

Innovation in AI Hardware

Innovations in AI hardware are focused on creating chips that can perform more calculations per second while using less power, which is vital for scaling AI technologies sustainably. Moreover, the development of hardware that can support more advanced forms of memory and processing capabilities, such as neuromorphic computing, which mimics the neural structures of the human brain, is seen as a key frontier in the journey toward AGI.

Several vendors have made significant advancements in this area.

NVIDIA remains the leading provider of AI chips with its latest model, the H100 GPU, which boasts a significant 18x increase in performance over its predecessor. NVIDIA has also introduced the Grace Hopper supercomputing platform, which pairs its Grace CPU with H100 GPUs over the high-speed NVLink interconnect; the platform is specifically designed to handle enormous AI workloads. Additionally, NVIDIA has expanded its reach into AI networking through its acquisition of Mellanox, further strengthening its position as a one-stop shop for AI infrastructure.

AMD: AMD is making significant strides in the AI field with its new AI accelerator, which they claim outperforms NVIDIA’s offering for inference tasks. They’re also targeting the training and inference market with their Instinct server GPUs. However, their biggest move is partnering with Microsoft to develop a custom AI chip designed for the cloud, which could disrupt NVIDIA’s dominance in this space.

Intel, the leading chip manufacturer, has been catching up in the AI industry by introducing its latest technology, the Gaudi 3 chip, designed to compete directly with NVIDIA in data centers. It has also launched the Ponte Vecchio accelerator, which offers a high-performance computing solution. Though Intel's CPUs were not traditionally known for their AI capabilities, its latest Meteor Lake CPUs include an integrated neural processing unit (NPU) for efficient on-chip AI processing.

Google:  The popular search engine is continuously improving its Tensor Processing Units (TPUs) and has recently announced the upcoming TPUv4, which promises significant performance and efficiency improvements. In addition, Google has partnered with Samsung to manufacture future generations of TPUs, ensuring a consistent supply chain. By adopting an open-source approach, Google has made the TPU designs available to the public, providing others with the opportunity to build their own AI systems based on Google’s technology.

OpenAI CEO Sam Altman reportedly aims to raise $5 trillion to $7 trillion to expand the global production of AI chips. The plan is to establish a network of semiconductor fabrication plants focused on producing GPUs, which are crucial for running complex AI models efficiently. Altman's project aims to increase GPU supply to address the current shortage caused by the rising demand for AI technologies; by doing so, he hopes to reduce costs and make these chips more accessible to developers and researchers, ultimately accelerating AI development. The initiative has gained global attention and raised questions about feasibility, regulation, and geopolitical effects. Partnerships with industry players like Intel and other semiconductor companies would be crucial. The project highlights the strategic importance of computing in AI development and the critical need for chip supply, in an AI chip market where various countries and companies are vying for dominance.

Towards Achieving AGI

6: Combining All Elements

Integrating Technologies and Methodologies:

Developing AGI is a complex and multifaceted process that requires the integration of various technologies and methodologies. To achieve AGI, a cohesive strategy is needed that leverages the strengths of powerful AI models, such as a Mixture of Experts (MoE) for specialized knowledge processing and multimodal language models for enhanced human-machine interaction. Autonomous AI agents bring the necessary autonomy and adaptability to navigate complex environments and make informed decisions independently. Communication and decision-making capabilities are also crucial components in building towards AGI. The evolution of voice-to-text and text-to-voice technologies enhances AI’s ability to communicate in a human-like manner, facilitating its seamless integration into human-centric environments.

Challenges and Solutions

Integration Challenges

  • Complexity and Compatibility: One of the main challenges in AI development is the difficulty of integrating various AI technologies and ensuring compatibility across different systems and models. This complexity can result in difficulties in creating cohesive systems that can effectively leverage the strengths of each component.
  • Data and Privacy Concerns: Integrating AI technologies raises data and privacy concerns as systems process vast amounts of sensitive and personal information.
  • Ethical and Social Implications: The development of AGI raises ethical and social challenges, such as potential biases, misuse, and impact on employment and society.

Potential Solutions

  • Interdisciplinary Research and Collaboration: Dealing with the complexities of AGI demands a collaborative effort from specialists in various domains such as AI, ethics, psychology, and specific areas of expertise. Cross-disciplinary research can offer a comprehensive strategy for creating AGI, ensuring that technological progressions are in harmony with ethical concerns and social principles.
  • Open Standards and Modular Design: Developing open standards for AI technologies and adopting modular design principles can facilitate integration, allowing different components to interact seamlessly and be updated independently. 
  • Ethical Guidelines and Governance: It is of utmost importance to establish ethical guidelines and governance structures to develop AGI. This includes creating frameworks for data privacy, preventing bias, and ensuring the responsible use of AI. By doing so, we can guarantee that AGI technologies are developed and deployed to benefit society as a whole.
  • Public Engagement and Education: Engaging the public and promoting education on AGI can address societal concerns and ensure development aligns with public values.

The pursuit of AGI is one of the most ambitious goals in the field of artificial intelligence. To achieve this goal, we need to focus on a convergence of technological innovation, ethical foresight, and global collaboration. This will help us realize the full potential of AI and AGI, shaping a future where AI can work alongside humanity to address some of the world’s most pressing challenges and open up new frontiers of knowledge and possibilities.

However, is this really true? Is the object in the mirror really closer than it appears?

There has been considerable discussion recently surrounding OpenAI's progress in creating AGI. Leaks and comments from insiders have fueled speculation that significant advancements have been made. Nevertheless, while some people assert that OpenAI may have already achieved AGI, these claims are unverified and continue to be debated. Adding to the speculation, OpenAI's CEO, Sam Altman, has acknowledged the possibility of AGI arriving in the near future.

Food for thought…


OpenAI has recently released a powerful AI model called Sora, which is capable of generating high-quality videos and images. One of Sora’s remarkable abilities is to simulate various aspects of the physical world, such as people, animals, and environments. It can also simulate simple actions that affect the state of the world, such as leaving persistent strokes on a canvas or rendering video game dynamics like in Minecraft. Interestingly, Sora can simulate some aspects of the physical world without explicit biases for 3D objects.

Sora's capabilities include generating videos with dynamic camera motion, maintaining long-range coherence and object permanence, and interpolating between different videos seamlessly. Some of the sample videos that have surfaced suggest that the model has some awareness of fluid dynamics and physics. Although the details of how this model was trained are unclear, it is almost certain that the training data did not include physics or fluid dynamics textbooks.

So, is it fair to say that Sora inferred the behavior of physics and fluid dynamics from the videos used to train it?

Prompt Engineering – Unlock the Power of Generative AI


In the last couple of years, artificial intelligence (AI) has reached a new pinnacle with the introduction of generative AI and large language models (LLMs), which touch almost every aspect of our lives. As generative AI models such as GPT-3 continue to expand and evolve, one critical area that has gained increasing attention is prompt engineering.
By harnessing the power of language models, prompt engineering enables us to leverage AI systems more effectively. Prompt engineering can dramatically improve the outputs of AI models and can facilitate more meaningful human-AI collaboration.
We will cover the fundamentals of Prompt engineering, including real-world applications and its significance in today’s rapidly advancing AI landscape.

What is Prompt Engineering

Prompt engineering is intended to enhance the performance of AI systems, specifically large language models (LLMs), by composing effective and targeted input prompts. Language models, like GPT-3 (OpenAI), BERT (Google), and RoBERTa (Facebook AI), are designed to generate human-like responses based on the input they receive.
However, if you have played around with OpenAI's ChatGPT (which uses GPT models behind the scenes), you may have noticed that it does not always produce the desired output or exhibit a deep understanding of the context.
Here, prompt engineering plays a vital role, focusing on fine-tuning the input prompts given to an LLM to achieve more accurate, relevant, and meaningful responses.  

Prompt engineering is a process that involves the following steps (a minimal sketch follows the lists below):

  1. Understanding the problem that you are solving
  2. Designing clear and concise prompts with explicit instructions
  3. Refining these prompts iteratively based on the generated output.

in order to:

  1. Unlock the maximum potential of AI models
  2. Improve the overall performance and effectiveness of AI models
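Here is a minimal sketch of these steps using the openai Python client; the model name and the summarization task are illustrative assumptions, not a recommendation of any specific setup.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: the problem - summarize a piece of customer feedback.
# Step 2: a clear, concise prompt with explicit instructions about the output.
prompt = (
    "Summarize the customer feedback below in exactly three bullet points, "
    "each under 15 words, focusing only on product complaints.\n\n"
    "Feedback: The app crashes whenever I upload photos, and support never replied."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# Step 3: inspect the output and refine the prompt if it misses the mark.
print(response.choices[0].message.content)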

Why Prompt engineering is important

Prompt engineering plays a very significant role in the context of Generative AI applications due to the following reasons:

  1. Context-awareness: By providing clear and concise prompts, we can help AI systems better understand the context of a task, leading to more accurate and relevant outputs.
  2. Enhanced AI performance: By composing effective prompts, AI systems can generate more accurate, relevant, and context-aware responses. A well-defined prompt can improve the performance and reliability of the model.
  3. Generalization: Prompt engineering helps AI systems generalize across different tasks and domains by encouraging them to rely on their understanding of language and context instead of exploiting quirks or biases present in the training data.
  4. Adaptability: With well-designed prompts, AI systems can become more adaptable to different tasks, making them more versatile across various applications.
  5. User experience: Prompt engineering lets us create AI systems that are more intuitive and user-friendly. By understanding the nuances of human communication, these models can respond to user inputs more effectively and deliver a better overall user experience.
  6. Reduction of biases: With prompt engineering, we can guide Generative Models to produce outputs less prone to biases. AI systems can be designed to avoid perpetuating harmful stereotypes and biases by providing more precise instructions and incorporating fairness considerations.
  7. Safety: One of our major concerns about Generative AI is its safety. Crafting effective prompts can help address safety concerns associated with AI-generated content. We can reduce the likelihood of generating inappropriate, offensive, or harmful content by providing specific instructions and limitations.
  8. Interdisciplinary applications: Prompt engineering can make a significant impact across various industries and research fields, including healthcare, finance, education, and entertainment. By tailoring prompts to specific domains, AI systems can be optimized to address unique challenges and requirements in their respective fields.
  9. Rapid development and deployment: One of the most significant tasks in an AI Application development is fine-tuning a model to make it work for a specific application. Prompt engineering can accelerate the development and deployment of AI applications by reducing the need for extensive fine-tuning or training of the model, thus saving time and resources and making AI systems more accessible and cost-effective.

Connection to language models and AI systems

Language models, such as GPT-3 or BERT, are AI systems trained on vast amounts of text data to generate human-like responses based on the input they receive. These models use the context provided in the input prompt to generate appropriate output. Prompt engineering is intimately connected to these models, as the quality of the input prompt significantly influences the model’s performance and the resulting output.

By crafting effective prompts, users can better utilize the capabilities of these AI systems to deliver more targeted and accurate results.

How Prompt Engineering is Used

We saw that prompt engineering is all about developing well-crafted prompts that help AI systems generate more accurate, relevant, and meaningful responses across various applications. In this section, we will go over a few key aspects of prompt engineering.

A. Identifying the Problem and Desired Output

The initial step in prompt engineering is pinpointing the problem and establishing the desired output. This process involves outlining the task you want AI systems to accomplish and determining the required output format. Identifying these elements helps create a solid foundation for crafting effective prompts that guide the AI system toward the desired outcome.

B. Crafting Effective Prompts

Three key aspects should be considered while developing prompts for AI systems (two example prompts follow this list):

  1. Clarity and conciseness: First and foremost, ensure the prompt is clear and concise, providing sufficient context for the AI to grasp the task at hand without becoming excessively verbose or ambiguous. Straightforward and brief prompts allow AI systems to focus on the problem and generate relevant responses.
  2. Explicit instructions: Generative AI Systems are built to be generic. So it is essential to incorporate specific instructions within the prompt to steer the AI system toward the desired output. Explicit instructions can include specifying output length, required information, or the presentation format for the output.
  3. Encouraging elaboration and reasoning: A recommended strategy to generate a more insightful and comprehensive response is to prompt the system for explanations or examples that substantiate its conclusions. This can significantly enhance the quality and value of the generated output, making it more informative and useful for your specific needs.
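To see the three aspects in practice, compare a vague prompt with an engineered one. The wording below is an illustrative assumption; adapt it to your own task and audience.

```python
# A prompt with no context, no instructions, and no required format:
vague_prompt = "Tell me about electric cars."

# The same request rewritten with the three aspects above in mind:
engineered_prompt = (
    "You are writing for a non-technical newsletter audience.\n"        # context, stated concisely
    "Explain the main advantages of electric cars over petrol cars.\n"  # clear, focused task
    "Answer in exactly 4 bullet points of one sentence each, "          # explicit output format
    "and end each bullet with a short real-world example."              # encourages elaboration
)
```

The engineered version gives the model context, a clear task, an explicit output format, and a request for supporting examples, which is usually enough to produce a noticeably more useful answer.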

C. Iterative Refining of Prompts

Prompt engineering is a step-by-step process that involves refining the prompts used to interact with a Generative AI system. This is done by evaluating the initial output of the AI based on the prompt, identifying areas for improvement, and adjusting the prompt accordingly. This refining process is repeated until the desired outputs are achieved, which ultimately leads to an enhancement in the performance of the AI system.

Examples of Prompt Engineering Applications

Prompt engineering has an ever-expanding range of applications across various industries and fields. Let us look at a few examples that demonstrate its versatility.

  1. Content generation: AI systems can be guided to create engaging and relevant content for blogs, social media, and marketing materials. Specific prompts can outline the topic, target audience, and desired tone to ensure the generated content aligns with the intended purpose.
  2. Sentiment analysis: AI systems can more accurately detect sentiment behind a piece of text—such as positive, negative, or neutral—when given well-crafted prompts. This capability can be leveraged in understanding customer feedback or analyzing social media trends.
  3. Question answering: AI-powered chatbots and virtual assistants can benefit from effective prompts that enable them to provide more accurate and contextually relevant answers to user questions. This improvement leads to better user experiences and increased trust in AI systems.
  4. Data labeling: Labeled data is critical for training machine learning models. Prompt engineering can help AI systems generate more accurate and consistent labels for datasets, streamlining the data preparation process and improving model training.

Prompt engineering plays a vital role in maximizing the capabilities of large language models across various industries. Users can generate more relevant and accurate outputs by crafting specific prompts that align with the intended purpose. The applications of prompt engineering will continue to expand and shape the future of AI.

Significance of Prompt Engineering 

Prompt engineering has become a crucial technique in the ever-evolving landscape of artificial intelligence. It is vital in enhancing AI capabilities, reducing biases and safety concerns, facilitating human-AI collaboration, and revolutionizing various industries and research fields.

Prompt engineering shapes how we interact with AI systems, enabling us to generate more relevant and accurate outputs by creating specific prompts that align with the intended purpose. As a result, prompt engineering has become integral to maximizing the effectiveness of large language models.

As AI continues to advance, prompt engineering will play an increasingly important role in shaping the future of AI and unlocking new possibilities across various industries. By reducing biases and facilitating human-AI collaboration, prompt engineering can improve the quality of life and work for people worldwide.

A. Enhancing AI Capabilities

Prompt engineering empowers AI systems to perform at their full potential by guiding them to produce more accurate, relevant, and context-aware responses. By optimizing input prompts, we can unlock the true capabilities of AI systems, leading to better performance and more reliable results.

B. Reducing AI Biases and Safety Concerns

One of the significant challenges in AI development is mitigating biases and addressing safety concerns. Prompt engineering offers a way to guide AI systems in generating outputs less prone to biases and stereotypes. By incorporating fairness considerations and more precise instructions, we can create AI systems that promote ethical use and avoid perpetuating harmful stereotypes.

C. Facilitating Human-AI Collaboration

Prompt engineering is essential for building AI systems that seamlessly collaborate with humans. By designing more intuitive and user-friendly prompts, AI systems can better understand and respond to human inputs, leading to more effective communication and cooperation. This enhanced collaboration ultimately results in a more satisfying user experience.

D. Impact on Industries and Research Fields

Prompt engineering has a transformative impact on various industries and research fields, with applications spanning from medicine to entertainment. Here are a few key sectors where Prompt engineering is making a difference:

  1. Medicine: Prompt engineering can help AI systems deliver more accurate diagnoses, recommend personalized treatment plans, and synthesize complex medical information for patients and healthcare professionals.
  2. Finance: Prompt engineering can help AI systems improve risk assessment, fraud detection, and investment analysis. By crafting targeted prompts, AI can deliver more accurate predictions and insights, enabling better decision-making.
  3. Education: Prompt engineering can guide AI systems in creating personalized learning plans, providing instant feedback on assignments, and assisting educators in identifying areas where students need additional help.
  4. Entertainment: In the entertainment industry, AI systems can leverage prompt engineering to generate engaging content, create realistic virtual worlds, and develop personalized user recommendations.

The significance of prompt engineering in the current world is immense, as it continues to redefine our interactions with AI systems and push the boundaries of what AI can achieve. By mastering prompt engineering, we can unlock new possibilities and drive advancements in various industries, ultimately shaping a more innovative and connected world.

Challenges and Limitations

Despite its transformative potential, prompt engineering has challenges and limitations. In this section, let us explore the inherent biases in language models, the difficulty in achieving precise control, and the issues surrounding scalability and generalizability.

A. Inherent Biases in Large Language Models

Language models are trained on vast amounts of text data, often containing biases and stereotypes in the real world. Consequently, these biases may inadvertently influence AI systems when generating responses. While prompt engineering aims to reduce biases and create fairer AI systems, it cannot entirely eliminate the inherent biases present in the language models themselves. Addressing this challenge requires a multifaceted approach, combining prompt engineering with advances in model training and data curation to minimize biases and ensure ethical AI use.

B. Difficulty in Achieving Precise Control

Since most of these AI Models are very generic, achieving precise control over AI-generated outputs is often challenging. While well-crafted prompts can guide AI systems toward more accurate and contextually relevant responses, attaining complete control over the generated content remains difficult. Even with carefully designed prompts, AI systems may still produce unexpected or undesirable outputs. This limitation will require continuous refinement of prompts and ongoing research into better techniques for controlling AI system behavior.

C. Scalability and Generalizability Issues

Prompt engineering is an iterative process often involving trial and error, making it time-consuming and resource-intensive. This approach can raise scalability issues, particularly when working with large-scale AI systems or applications requiring numerous prompts. Moreover, crafting effective prompts for one specific task or AI system may not guarantee generalizability to other tasks or systems. Hence there is a need to strike a balance between creating customized prompts for each use case and developing general strategies that can be adapted across various use cases.

While prompt engineering has the potential to revolutionize our interactions with AI systems, it is essential to acknowledge and address its challenges and limitations. By understanding the inherent biases in language models, working towards achieving precise control, and addressing scalability and generalizability issues, we can continue to refine and advance prompt engineering techniques, ultimately unlocking new possibilities in the world of artificial intelligence.

Exploring New Horizons: Future Directions and Opportunities in Prompt Engineering

As we continue to unlock the potential of generative AI, several promising future directions and opportunities await, offering exciting prospects for further advancement. In this section, we will see what the future holds for prompt engineering.

A. Research Advancements in Prompt Engineering

As AI systems and language models continue to evolve, research is needed to develop more effective and sophisticated prompt engineering techniques. Future advancements in this domain could include:

  • Creating new methods for optimizing prompts.
  • Developing AI-assisted Prompt engineering tools.
  • Exploring techniques that allow for precise control over AI-generated outputs.

I believe these research advancements will help overcome existing challenges and limitations.

B. Interdisciplinary Collaborations

Collaboration across various fields, including linguistics, psychology, and computer science, is a new development area in prompt engineering. Experts from different disciplines can combine their perspectives and expertise to create more effective prompts that account for diverse contexts and nuances. Such collaborations can lead to innovative solutions that address biases, ethical considerations, and usability concerns, driving the field of prompt engineering forward.

C. Open-source Initiatives and Community Involvement

Open-source initiatives and community involvement are crucial for the growth and development of prompt engineering. Researchers and developers can share resources, knowledge, and tools to advance the field, identify best practices, and promote innovation. Open-source initiatives can also facilitate the adoption of prompt engineering techniques by developers and organizations worldwide. Encouraging community involvement ensures that diverse perspectives and experiences are considered.

Prompt engineering holds immense promise, with opportunities for research advancements, interdisciplinary collaborations, and open-source initiatives. Collaboration and innovation can shape the future of AI, unlocking new possibilities in various industries and fields. As we look ahead, the potential for prompt engineering to transform our interactions with AI systems is exciting, paving the way for a more connected and intelligent world.

Final thoughts

In conclusion, prompt engineering is a critical aspect of maximizing the effectiveness of large language models, and its potential to transform our interactions with AI systems is fascinating. Creating specific prompts that align with the intended purpose allows users to generate more relevant and accurate outputs, leading to better user experiences and increased trust in AI systems.
Furthermore, interdisciplinary collaborations, open-source initiatives, and community involvement are crucial for the growth and development of prompt engineering. By combining expertise from different fields and sharing resources, knowledge, and tools, we can collectively advance the field and unlock new possibilities across various industries and research fields.
As AI continues to evolve, prompt engineering will play an increasingly important role in shaping the future of AI, and there will always be new developments and techniques to explore. By adopting these opportunities and fostering a spirit of collaboration and innovation, we can continue improving the quality of life and work for people around the world through AI.

Resources to learn more about Prompt Engineering

BlockChain Fundamentals Part 1


Let’s take an example where Person A is transferring $50 to Person B.
Person A sends a request to his bank to initiate the transfer. The bank verifies the request and, if everything is in order, subtracts $50 from Person A's account and adds $50 to Person B's account. The bank then updates its ledgers to reflect these changes.

Transaction in a Centralized Ledger

Most of the transactions we perform these days are handled by one or more servers managed by a single entity. This entity could be your bank, a social media platform, or an online shop, and they all do the same standard action: record your transactions in a centralized database.

Centralized Ledger

You might already be thinking that these entities do not really use a single server or ledger to perform these transactions. That is true, but even if they run multi-geographic, fault-tolerant, sophisticated server farms to store their ledgers, those ledgers are all managed and controlled by a single entity. In other words, the entity that records this data has full control to do anything with it.

Person A, in the above case, has to trust that the bank he transacts with will act as he expects. Though this system has been working for some time, there are a few drawbacks to this type of transaction and ledger.

  1. The entity that owns the ledger has full control over it and can manipulate the ledger at will, without its customers' permission.
  2. The records in this ledger can easily be tampered with by someone who has access to it, meaning malicious changes to the ledger affect everybody who relies on it (for example, someone could hack into the bank's centralized ledger and modify transactions).
  3. Another disadvantage is the single point of failure. For example, if the bank decides to shut down its service, users will no longer be able to perform transactions.
    In the extreme case where a natural disaster wipes out all of its data centers, all the transactions in the ledgers will be lost.
  4. Centralized ledgers store data in silos; for example, your bank has its ledger, an auditor has his own ledger, and the tax authorities have theirs, and they are never synchronized.
  5. Though it is a centralized ledger from a general point of view, organizations have to spend a great deal of money building redundancy and scaling for their ledgers, which makes them very expensive.

What is BlockChain

In simple terms, Blockchain is a distributed ledger of transactions. All the transactions in the blockchain are cryptographically secured and synchronized among the participants.

How Blockchain Works

Key points about Blockchain

  1. It is a Distributed Ledger.
  2. Members of a blockchain network are called Nodes.
  3. Each node has a copy of the full ledger.
  4. Nodes use a Peer-to-Peer Network for synchronization.
  5. All information on the distributed ledger is secured by Cryptography.
  6. By design, it eliminates the need for a Centralized Authority: transactions are validated by peers before they are accepted.
  7. Transactions are added to the ledger based on Consensus among the nodes.
  8. Each valid transaction in the Blockchain is added to a Block.
  9. Blockchain miners create Blocks.
  10. Multiple Blocks make up a Blockchain.
  11. All Blocks in the Blockchain are Immutable.
  12. It ensures a complete audit trail of all transactions (Verifiability).

I know this is a lot to chew on, but I promise you will get all these concepts by the end of this article. So stay with me and let's move on…

Distributed Ledgers  (DLT)

According to Wikipedia, "A distributed ledger is a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralized data storage."

Distributed Ledger

In a distributed ledger, each participating member has a copy of the ledger. In simple terms, during any transaction, the ledgers of the sender and receiver are updated and the transaction is broadcast; the transaction details are then updated in the ledgers of all participants over the peer-to-peer network.

Peer to peer (P2P) network

Peer-to-peer (P2P) is a decentralized communication model in which each participating node has the same capabilities. Unlike the client-server model, any node in a peer-to-peer network can both send requests to other nodes and respond to requests from other nodes. The best-known example of a peer-to-peer network is BitTorrent.

Peer to peer Network

Any peer can perform a transaction on the P2P network, which means two nodes can attempt conflicting transactions at the same time, for example spending the same funds twice. This is called the double-spend problem. Blockchain uses a consensus mechanism to resolve these kinds of conflicts.

We will discuss the double-spend problem and consensus systems in detail in a later post. Let's focus on the basic concepts first.

Cryptography in Blockchain

By its nature, any data in a blockchain is visible to all members of the network, which could make this data vulnerable. However, blockchain uses cryptography to make all transactions extremely safe and secure.

In simple terms, Cryptography is used for obfuscating (encrypting and decrypting) data. Blockchain leverages two cryptographic concepts in its implementation.
They are the following

  • Hashing
  • Digital Signature.

What is Hashing

Hashing is a mechanism by which any input is transformed into a fixed-size output using a hashing algorithm. The input can be of any type; for example, you can generate a hash of an image, text, music, a movie, or a binary file.

Hashing-Representation

Whatever the size of the file, the hashing algorithm guarantees that the output is of a fixed size. (For example, if you create a SHA-256 hash of any file, the hash will always be 256 bits.)
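A quick way to see the fixed-size property (and the determinism property discussed below) is Python's built-in hashlib module:

```python
import hashlib

# Same input, same 256-bit (64 hex character) digest, every time.
print(hashlib.sha256(b"Hello World").hexdigest())
# a591a6d40bf420404a011733cfb7b190d62c65bf0bcda32b57b277d9ad9f146e

# A much larger input still produces a digest of exactly the same size.
big_input = b"Hello World" * 100000
print(len(hashlib.sha256(big_input).hexdigest()) * 4, "bits")   # 256 bits
```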

Key properties of Hashing.

Any hashing algorithm should adhere to the following principles.

Determinism
  • For a given value the algorithm should always produce the same hash value.
For example, the SHA-256 hash of the string "Hello World" will always be
A591A6D40BF420404A011733CFB7B190D62C65BF0BCDA32B57B277D9AD9F146E
Pre-image resistance
  • This means it should be computationally infeasible to recover the input from the output.
In the above example, we saw the hash value
A591A6D40BF420404A011733CFB7B190D62C65BF0BCDA32B57B277D9AD9F146E
represents "Hello World". 
It is computationally infeasible to recover the words "Hello World" from the hash value above.
Second pre-image resistance
  • This means that, given an input and its hash, it should be computationally infeasible to find a different input that produces the same hash.
    For example, hash("Hello World") != hash("XXX"), where "XXX" is any input other than "Hello World".
Collision Resistance.
  • This means it should be hard or impossible to find two different inputs (of any size or type) that produce the same hash. Collision resistance is very similar to second pre-image resistance.

Hashing is commonly used to compute the checksum of a file. For example, when you download software from a server, the software vendor provides the checksum or hash of the software package. If the hash of the downloaded software matches the hash given by the provider of the file, we can be confident that the software was not tampered with.

Blockchain uses hashes to represent its current state. Each block in the blockchain may contain hundreds of transactions, and verifying each transaction individually would be very expensive and cumbersome. So blockchain leverages a Merkle root to verify the transactions.

Merkle tree of a Block

In a Merkle tree, each non-leaf node holds the hash of its child nodes. Look at the diagram below to understand the concept of a Merkle tree.

MerkleRoot

Each block in the blockchain contains the Merkle root of its transactions and the hash of its previous block. The Merkle root can be used as a definitive mechanism to verify the integrity of the block, as even the slightest change to any record in the tree will alter the value of the Merkle root.
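Here is a toy Merkle-root computation in Python to make the idea concrete. It is a simplified sketch: real blockchains add details such as Bitcoin's double SHA-256 and specific byte ordering, which are omitted here.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> str:
    level = [sha256(tx) for tx in transactions]          # hash every transaction (the leaves)
    while len(level) > 1:
        if len(level) % 2:                               # odd count: repeat the last hash
            level.append(level[-1])
        # hash each pair of child hashes to form the next level up
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

txs = [b"A pays B 5", b"B pays C 2", b"C pays D 1"]
print(merkle_root(txs))
# Changing a single transaction changes the root completely:
print(merkle_root([b"A pays B 50", b"B pays C 2", b"C pays D 1"]))
```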

Block Structure in BlockChain

In other words, the entire state of a blockchain system can be validated by the hash of its last block, which is just 256 bits.

What is a Digital Signature

A classic example of digital signatures is website traffic over the HTTPS protocol using SSL, which relies on digital signatures to ensure the authenticity of the server.

A user prepares for digital signing by generating a public and private key pair.

Generated Key

A public key and a private key are mathematically related to each other. The private key should be kept secret and is used for digitally signing messages. The public key is intended to be distributed publicly and is used by the message recipient to validate the authenticity of the message.

Sending a Message with Digital Signature

The sender signs all his transactions with his private key. This ensures that only the owner of the account, who holds the private key, can perform the transaction.

Verifying a message with a Digital Signature

The receiver, or any node in the blockchain, verifies the transaction by checking its digital signature using the sender's public key.
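A minimal sign-and-verify sketch using the Python cryptography package (pip install cryptography) is shown below. The secp256k1 curve, the same one Bitcoin uses, is chosen here, and the transaction text is an illustrative assumption.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())   # kept secret by the sender
public_key = private_key.public_key()                    # shared with everyone

transaction = b"Person A pays Person B $50"              # illustrative transaction data
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: the transaction really came from the key owner")
except InvalidSignature:
    print("Signature invalid: the transaction was forged or altered")
```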

Key Points to Remember about Hashing in Blockchain

  • Hashing is used for verifying the integrity of the transaction
  • The digital signature is used for verifying the identity of the performer of a transaction. 

For learning more about the Cryptography and Hashing in Blockchain, please visit the following links

https://blockgeeks.com/guides/cryptocurrencies-cryptography/

https://blockgeeks.com/guides/what-is-hashing/


What is a Block

We saw that a blockchain is a group of blocks in sequential order, so let's take a quick peek at what a block is. In simple terms, a block is a group of valid and verified transactions. Each block in the blockchain is immutable. Block miners continuously process new transactions, and new blocks are added to the end of the chain. Each block holds the hash of the previous block, thus ensuring the integrity of the chain.

Block Structure in BlockChain

Every block in a blockchain contains the following information (a minimal sketch follows this list):

  • Hash of the previous Block
  • Timestamp at which the block was created
  • List of the transactions that were part of that Block
  • Merkle tree of all the transactions in that block
  • Nonce – a random value generated by the miner
  • Hash of the block header, which will be used as the "previous block hash" in the next block
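Here is a stripped-down sketch of a block holding the fields listed above. A real block stores the Merkle root of its transactions rather than raw strings and uses a compact binary header; this is just to show how each block's header hash chains to the next block.

```python
import hashlib
import json
import time

class Block:
    """A toy block: real blocks store a Merkle root and a binary header."""
    def __init__(self, previous_hash: str, transactions: list, nonce: int = 0):
        self.previous_hash = previous_hash
        self.timestamp = time.time()
        self.transactions = transactions
        self.nonce = nonce

    def header_hash(self) -> str:
        header = json.dumps(
            {"prev": self.previous_hash, "time": self.timestamp,
             "txs": self.transactions, "nonce": self.nonce},
            sort_keys=True)
        return hashlib.sha256(header.encode()).hexdigest()

genesis = Block(previous_hash="0" * 64, transactions=["coinbase reward"])
block1 = Block(previous_hash=genesis.header_hash(), transactions=["A pays B 5"])
print(block1.header_hash())   # becomes the "previous hash" of the next block
```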

Let's take a look at a real example of a block. As you all know, Bitcoin is based on blockchain, so to explain a block I am referring to a real block from the Bitcoin network, taken from www.blockchain.info.

You can see that this block has a header and a list of transactions. Transactions and their details can vary depending on the blockchain implementation; since this is a Bitcoin block, you will see Bitcoin transactions.

Bitcoin Block Sample

Let's go through some of the key fields in this block header:

  • Block Id – Unique ID representing this block
  • Number of Transactions – Total number of transactions recorded in that block
  • Height – Total number of blocks preceding this block on that blockchain (in this case, there are 505,234 blocks created before this block)
  • Timestamp – Time at which this block was created
  • Relayed By – The miner who mined this block
  • Transactions – The hash of each transaction along with its details

Genesis Block

We learned that every block has the hash of its previous block attached to it. It follows that the first block (Block 0) of a blockchain has no previous block; this is known as the Genesis Block. The genesis block is usually hardcoded into the applications that use that blockchain.

Click here to see the Genesis Block of Bitcoin

Block time

Block time is the time taken to mine a block, and it varies from implementation to implementation. To provide security and prevent forking, each implementation defines its own block time. For Bitcoin, the block time is 10 minutes, whereas for Ethereum it is around 20 seconds.

What is in a transaction

Again, I am using a Bitcoin transaction to explain a transaction record in blockchain. The transaction below shows the transfer of bitcoin from one address to two recipients; it was part of the block shown in the previous image.
The tree diagram below shows the related transfers of those bitcoins. This audit trail of the asset (bitcoin in this example) ensures the legitimacy of the transaction.

Bitcoin Transaction

What is Block Mining

In simple words, miners are the ones who run a specialized version of the blockchain software that can add a block to the blockchain. Miners get rewarded each time they add a block. In a network, multiple miners compete with each other to create the next block.

Blockchain Network with Nodes and Miners

Miners keep a pool of unconfirmed transactions and try to bundle them into a block. They then have to solve a mathematical puzzle to add the block to the blockchain. The puzzle is to produce a hash of the block with particular properties.

The constants in this puzzle are the following

  • Previous Block Hash
  • Time Stamp
  • Merkle root

The variable in this puzzle is the nonce, a value that the miner keeps changing (in Bitcoin, it is a 32-bit number).

Miners keep trying new nonces until they solve the puzzle. The first miner to solve it wins, broadcasts the block to the network, and the other nodes add the block to their ledgers.

Currently, Bitcoin and Ethereum use an algorithm called proof-of-work to mine a block.

Below is the illustration of the proof-of-work algorithm. 

Proof of work - Bitcoin Implementation
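A simplified proof-of-work loop looks like the sketch below: keep changing the nonce until the block hash starts with a required number of leading zeros (the difficulty). Real Bitcoin mining uses double SHA-256 and a numeric target rather than a zero prefix, but the principle is the same; the example header values are illustrative placeholders.

```python
import hashlib

def mine(previous_hash: str, merkle_root: str, timestamp: str, difficulty: int = 4):
    """Try nonces until the block hash has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        header = f"{previous_hash}{merkle_root}{timestamp}{nonce}"
        block_hash = hashlib.sha256(header.encode()).hexdigest()
        if block_hash.startswith(prefix):
            return nonce, block_hash
        nonce += 1

nonce, block_hash = mine("00ab12...", "9f3c...", "2018-01-20T10:00:00")
print(f"Found nonce {nonce} -> {block_hash}")
```

Raising the difficulty by one hex digit makes the search roughly 16 times harder, which is how networks keep the block time stable as mining hardware improves.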

There are multiple algorithms used by blockchain networks for mining and consensus. We will discuss those algorithms in detail in a different post.

Few Usages of Blockchain

Blockchain is an emerging technology, and there are a ton of use cases we can address with it. Here are a few use cases that can leverage the power of blockchain.

  • Health care records sharing
    Privacy of personal health records is a major concern right now. Think about a system where your personal health records are stored on a blockchain and can be shared with your doctors within seconds.
  • Insurance Claim Processing
    We know that the insurance industry is prone to fraudulent claims and fragmented sources of data. Chances of error (intentional or unintentional) are very high. With blockchain, we can have more transparent and error-free insurance claims.
  • Payments and Banking
    The major issues in the banking and payment sectors are fraud and money laundering. With the transparency blockchain provides, we should be able to eliminate most of these.
  • Voting systems
    We saw earlier that blockchain is tamper-proof, or at least tamper-evident. Implementing a blockchain-based voting system could create a far more tamper-resistant election process.
  • Smart Contracts
    Smart contracts allow contracts to execute themselves. (I will cover smart contracts with examples in a different post.) Blockchain not only eliminates the need for a third party to enforce a smart contract, but also enforces the terms of the contract automatically when they are met.

If you look at these use cases, blockchain is a great fit when the transactions can be validated automatically by smart devices. In other words, to leverage the full potential of blockchain, we need more IoT sensors that can validate many of these “transactions”.

Food for thought

Imagine you go to a grocery store and pick up a bottle of organic milk, but you are not sure whether the milk is really organic. You take your smartphone and scan the QR code for the milk's batch ID. The application lists the details of the dairy farm, the cattle feed it buys, and the health history and medical records of all the cows on the farm, so you can trace back every detail behind what the farm states about the milk. That is the future blockchain can offer.

Implementations

If you look at the Gartner Hype Cycle for emerging technologies 2017, you can see that blockchain is slowly moving into the Trough of Disillusionment, and Gartner expects it to reach mainstream adoption in 5 to 10 years. My feeling is that this could be shorter.

Emerging Technology Hype Cycle for 2017_Infographic_R6A
Picture Courtesy – Gartner.com

Having said that, there is a ton of development going on in the blockchain field. Both large and small players are bringing lots of innovation to blockchain technology.

Below are a few of the current major implementations of blockchain.

Bitcoin

I don’t think Bitcoin needs an introduction. It is the first digital cryptocurrency and leverages blockchain technology.

Ethereum

Ethereum is an open-source blockchain platform for blockchain applications and smart contracts. It is the first widely adopted blockchain application platform. We will cover more on Ethereum in a different post.

We covered the basics of blockchain here, and this is just the beginning. Stay tuned for more articles where we will go deeper into some of the topics discussed here.

Thanks for reading, and please leave your feedback in the comments section.

Exploring MongoDB Stitch… Backend as a Service !!!


In my long software development career, I have always felt that the most time-consuming task is not building the actual business logic, but writing the code needed for the basic housekeeping tasks, or in other words the “basic chores” of an application developer.

Initially, these chores included session management, memory management, thread management, etc. With the introduction of boilerplate code, frameworks, and so on, these chores have been drastically reduced, letting development teams focus more on the core business functionality.

However, most teams still end up re-inventing the wheel by writing lots of essential features over and over again, such as user authentication, sending notifications to customers, etc.

If the above is the story for enterprises, the story is a lot worse for startups. The problem of unwanted chores is a big predicament for them, as most of the time they are starting from scratch.

If you look at the Basic chores, they include the following.

  • Authentication
  • CRUD of Data
  • Fine-Grained data access
  • Integrations with other services

Looking at the overall cost of these chores, it is not just the development time and effort, but also the increased code complexity, testing effort, etc.

MongoDB Stitch

MongoDB Stitch is meant to address precisely these problems. At its last annual developer conference, MongoDB introduced Stitch as an addition to MongoDB Atlas, their cloud-based database-as-a-service.

Though Atlas is available on most platforms, MongoDB Stitch is currently supported only in the AWS US East 1 region, and it is tailored as an add-on to an existing MongoDB Atlas subscription.

MongoDB Stitch allows you to create an application from your Atlas console and configure it to do the following:

  • Add new features to your existing application
  • Control the access to data for user
  • Integrate with other services

Once you set up your MongoDB Stitch application in the console, you can create a client application and start calling Stitch functionality using the Stitch client.

Currently, Stitch clients are available for the following platforms:

  • Browser and Node (JavaScript)
  • Android Application (java)
  • iOS Application (Swift)

MongoDB has done an excellent job of providing detailed documentation, and you can use the getting started guide to build sample applications.

Though MongoDB Stitch is still in beta, the features it offers look very promising. Let us explore a few of these features.

Collection/ Field-level permissions

MongoDB Stitch allows the developer to specify access rules for collections. These rules can be defined either for the collection itself or for each individual field in the collection. Be aware that whatever rules you specify here are still constrained by the access you have granted at the MongoDB level.

Stitch Admin Console 2018-01-04 19-59-16

Service Integrations

This is my favorite part; integrating with other services is a breeze. As of January 2018, the following service integrations are supported by Stitch:

  • S3 – Upload a file to S3, generate a signed URL
  • Amazon SES – Send email
  • GitHub – Webhooks
  • HTTP Services – Basic HTTP calls (GET, POST, DELETE, PUT, PATCH, HEAD)
  • GCM – Push notifications to Apple and Android devices
  • Twilio – Send and receive text messages

Each of these service calls can be configured with its own rules.

Stitch Admin Console Service Integration

Authentication Services

User authentication is supported using the following providers:

  • E-mail and password
  • Google
  • Facebook
  • API Keys
  • Custom/Third party Authentication.

These integrations are super easy; I was able to create Google and Facebook login integrations for my sample app within 15 minutes.

Stitch Admin Console Authentication

Values (Constants)

These are named constants that you can use in Stitch functions and rules.

Stitch Admin Console Values

Functions

Functions in MongoDB Stitch are written in JavaScript and can be edited and tested using the built-in function editor.

As of now, ECMAScript 6 (ES6) is not supported in functions.

Stitch Admin Console 2018-01-06 12-32-24


After playing around with MongoDB Stitch for a few days, I feel that it has a lot of potential; it can definitely improve productivity and help you focus on the core business logic.

Final Words

During my POC I wanted to extend my logged-in users with additional attributes, say, for example, the address and phone number of each user. However, MongoDB Stitch saves users through a separate mechanism that is not really extendable and is not visible as a collection.

Stitch Admin Console users

If MongoDB Stitch allowed the user information to be saved to a collection that could be extended with any additional attributes I wanted to add to those users, it would make life much easier for developers.

Here are some useful links on MongoDB Stitch:

Documentation: https://docs.mongodb.com/stitch/

Tutorials and Getting Started Guide: https://docs.mongodb.com/stitch/getting-started/

Let me know your thoughts …

Setting up Cloudwatch for Custom logs in AWS Elastic Beanstalk


Amazon CloudWatch monitoring services are very handy for gaining insight into your application metrics. Besides metrics and alarms, you can use CloudWatch Logs to go through your application logs without logging into your server and tailing the logs.

I ran into a few issues when I initially set up CloudWatch for my custom logs in an Elastic Beanstalk Tomcat application. I will walk you through the whole process in this post.

Setting up your application

In this example, I am using a Spring Boot application which will be deployed in an Elastic Beanstalk Tomcat container.

.ebextensions config file

First, you need to create an .ebextensions config file for your application.
Here is a working sample of the config file:


files:
  "/etc/awslogs/config/mycustom.conf":
    mode: "060606"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/mycustomlog.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/mycustomlog.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat8/mycustomlog.log*

The above configuration copies logs from /var/log/tomcat8/mycustomlog.log (including rotated files matching the pattern mycustomlog.log*) to a log group named after your application environment; the Fn::Join expression resolves to something like /aws/elasticbeanstalk/<environment-name>/var/log/tomcat8/mycustomlog.log.

These lines create the configuration file mycustom.conf at /etc/awslogs/config/mycustom.conf. Once deployed, you can SSH into the instance and look at this location to view your configuration:


files:
  "/etc/awslogs/config/mycustom.conf":

The following lines define the log group, the log stream, and the file that the CloudWatch Logs agent should copy over to CloudWatch:


    content: |
      [/var/log/tomcat8/mycustomlog.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/mycustomlog.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat8/mycustomlog.log*

Make sure your .ebextensions config is valid YAML before deploying it to your application environment. I use http://www.yamllint.com/ to check the validity of my YAML files.

Place your config file in the /src/main/resources/ebextensions/ folder of your project.

Screenshot1

Gradle Script

Now you need to update your Gradle script to make sure that the .ebextensions folder is packaged along with your war file.

Update your Gradle script to include the .ebextensions folder in the root of the war:


war {
    // copy the config files into a .ebextensions folder at the root of the war
    from('src/main/resources/ebextensions') {
        into('.ebextensions')
    }
}

With this Gradle script, your war file will have a .ebextensions folder at its root containing your config file.

Now let’s prepare your Elastic Beanstalk environment to enable CloudWatch.

Prepping your Elastic Beanstalk Environment

To enable CloudWatch Logs for Elastic Beanstalk you need the following:

  1. Permission for Elastic Beanstalk to create log groups and log streams
  2. CloudWatch Logs enabled on the Elastic Beanstalk application

Log in to your AWS account, go to IAM, and create a new policy similar to the following.

Grant Permission to Elastic Beanstalk

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchLogsAccess",
            "Action": [
                "logs:CreateExportTask",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeDestinations",
                "logs:DescribeExportTasks",
                "logs:DescribeLogGroups",
                "logs:FilterLogEvents",
                "logs:PutDestination",
                "logs:PutDestinationPolicy",
                "logs:PutLogEvents",
                "logs:PutMetricFilter"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:logs:*:*:log-group:*"
            ]
        }
    ]
}

Now attach this policy to “aws-elasticbeanstalk-ec2-role”

Enable CloudWatch Logs on your Elastic Beanstalk application

Go to your Elastic Beanstalk application and edit the Software Configuration in the Configuration menu.

Configuration 2017-12-10 16-21-55

Enable CloudWatch Logs in the settings.

Configuration 1 2017-12-10 16-21-55

Once you do this, AWS will re-configure the environment. Now deploy the war file created by the Gradle script.

Usually, AWS picks up the configuration after you deploy the new war file; if not, restart the environment.

Go to CloudWatch to verify your log stream.

Troubleshooting Tips

As I said before, I had issues while setting this up. If your configuration is not getting picked up, go through the following steps to troubleshoot:

  • Make sure that your YAML is valid.
  • SSH into the environment and make sure that the file created at /etc/awslogs/config/mycustom.conf is valid.
  • Check eb-publish-logs.log to see if it has any errors.
  • Finally, if nothing works, rebuild your environment.

Service Based Objects (SBO’s) in Documentum


The Documentum Business Object Framework (BOF), introduced in Documentum 5.3, plays a key role in most current Documentum implementations. The Service-Based Object (SBO) is one of the basic members of the Documentum BOF family. Let’s see what makes Service-Based Objects so popular and how you can implement one.

What is an SBO

In simple terms, an SBO in Documentum can be compared to a session bean in a J2EE environment. SBOs enable developers to concentrate just on the business logic, while all the other aspects are managed for them by the server. This reduces the application code significantly and removes a lot of complexity. The most significant advantage of the BOF is that code is deployed to a central repository; the repository maintains the module, and DFC ensures that the latest version of the code is delivered to the client automatically.

Service-Based Objects are repository- and object-type-independent, which means the same SBO can be used by multiple Documentum repositories and can retrieve and operate on different object types. SBOs can also access external resources, for example a mail server or an LDAP server. Before the introduction of Documentum Foundation Services, SBOs were commonly used to expose Documentum web services.

An SBO can call another SBO, and an SBO can in turn be used by any Type-Based Object. (Type-Based Objects (TBOs) are a different kind of business object, which I will explain in a separate study note.)

A simple example of an SBO implementation would be a zip code validator. Multiple object types across multiple repositories might have a zip code attribute, so if this functionality is exposed as an SBO, it can be used by custom applications irrespective of object type and repository. The validator SBO can even be used by different TBOs for validation.

Here are some bullet points about SBOs for easy remembering:

  • SBOs are part of the Documentum Business Object Framework
  • SBOs are not associated with any repository
  • SBOs are not associated with any Documentum object type
  • SBO information is stored in the repository designated as the Global Registry
  • SBOs are stored in the /System/Modules/SBO/<sbo_name> folder of the repository, where <sbo_name> is the name of the SBO
  • Each folder in /System/Modules/SBO/ corresponds to an individual SBO

How to implement an SBO using Composer

The steps to create an SBO are these.

1) Create an interface that extends IDfService and define your business method
2) Create the implementation class and write your business logic; this class should extend DfService and implement the interface defined in step 1
3) Create a jar file for the interface and another jar for the implementation class, then create Jar Definitions
4) Create an SBO module and deploy your Documentum archive using Documentum Composer (Application Builder for older versions)

Let’s walk through these steps with an example zip code validator SBO. I am not covering the Application Builder steps here; the screenshots and notes will give you an insight into how to use Documentum Composer to implement a Service-Based Object in Documentum version 6 or above.

Step 1: Create an interface and define your Business method

The first step is to create an interface that defines the business functionality. This interface should extend the IDfService interface. The client application will use this interface to instantiate the SBO.

Click New –> Interface in Documentum Composer. Click on the Add button of Extended Interfaces and search for IDfService. Select IDfService and click OK

image

Now add the business method validateZipCode() to the interface. The code should look like the following.

package com.ajithp.studynotes.sbo;

import com.documentum.fc.client.IDfService;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public interface IZipValidatorSBO extends IDfService {

    public void validateZipCode(IDfSysObject obj, String zipCode, String repository) throws DfException;
}
Step 2: Create the implementation class

All Service-Based Object implementation classes should extend the DfService class and implement the interface created in the first step. DfService is an abstract class; a few of its methods that were abstract in 5.3 have been given default implementations in 6.0 and later.

  • getVendorString() – returns String. The default implementation returns an empty string; override it to change this.
  • getVersion() – returns String. The default implementation does not return the right version; override this method to return your major.minor version.
  • isCompatible() – returns boolean. The default implementation returns true only if the version is an exact match.

Let’s see some other important methods of DfService Class before we move further.

  • getName() – returns String. Returns the fully qualified logical name of the service interface.
  • getSession(String repositoryName) – returns IDfSession. Returns an IDfSession object for the repository name passed as the argument. Make sure you call releaseSession() once you are done with the operation that uses the session.
  • releaseSession(IDfSession session) – Releases the handle to the session reference passed to this method.
  • getSessionManager() – returns IDfSessionManager. Returns the session manager.

Managing repository sessions in an SBO: as we saw in the table above, it is always good practice to release the repository session as soon as you are done with it. The ideal pattern looks like this:

// Get the session
IDfSession session = getSession(repositoryName);
try {
    // do the operation that needs the session
} catch (Exception e) {
    // process the exception
} finally {
    // always release the session
    releaseSession(session);
}

Transactions in SBO

Another important thing to know is how to handle transactions in an SBO. Note that only session manager transactions can be used in an SBO; the system will throw an exception if a session-based transaction is used within an SBO.

beginTransaction() starts a new transaction; use commitTransaction() to commit it or abortTransaction() to abort it. Always ensure that you are not beginning a transaction while another transaction is already active; you can use isTransactionActive() to find out whether a transaction is active or not.

Another important point: if your SBO did not start the transaction, do not commit or abort it in the SBO code. Instead, if you want the transaction to be rolled back, use the setTransactionRollbackOnly() method.
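Here is a minimal sketch of that pattern, written as a method you might add to a DfService subclass such as the ZipValidator above (using the same imports). The method name transferDocument(), the attribute names, and the values are hypothetical and only for illustration; the transaction calls themselves (isTransactionActive(), beginTransaction(), commitTransaction(), abortTransaction(), setTransactionRollbackOnly()) are the session manager methods mentioned above.

public void transferDocument(IDfSysObject source, IDfSysObject target, String repository) throws DfException {
    IDfSessionManager sessionManager = getSessionManager();
    // Only start a transaction if the caller has not already started one.
    boolean ownTransaction = !sessionManager.isTransactionActive();
    if (ownTransaction) {
        sessionManager.beginTransaction();
    }
    IDfSession session = getSession(repository);
    try {
        // Two updates that must succeed or fail together (illustrative only).
        source.setString("status", "transferred");
        source.save();
        target.setString("status", "received");
        target.save();
        if (ownTransaction) {
            sessionManager.commitTransaction();
        }
    } catch (DfException e) {
        if (ownTransaction) {
            // We started the transaction, so we may abort it ourselves.
            sessionManager.abortTransaction();
        } else {
            // Someone else owns the transaction; just ask DFC to roll it back.
            sessionManager.setTransactionRollbackOnly();
        }
        throw e;
    } finally {
        releaseSession(session);
    }
}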

Other important points

1) Since SBOs are repository independent, do not hardcode repository names in their methods. Either pass the repository name as a method parameter, or keep it as a variable in the SBO and use a setter method to populate it after instantiation.

2) Always try to make SBOs stateless (it is a pain to manage stateful SBOs).

3) Don’t reuse SBO instances; always create a new instance before an operation.

Now let’s see how to code our zip code validator SBO.

Click New –> Class. Click the Browse button next to Superclass, search for and select DfService, and in the Interfaces section search for the interface created in the previous step and click OK. Also select the “Inherited abstract methods” option under “Which method stubs would you like to create?”.

image

I have overridden the getVersion() method for illustration purposes. See the inline comments in the code sample.

package com.ajithp.studynotes.sbo.impl;

import com.ajithp.studynotes.sbo.IZipValidatorSBO;
import com.documentum.fc.client.DfService;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class ZipValidator extends DfService implements IZipValidatorSBO {

public static final String versionString = "1.0";
// overriding the default 
public String getVersion() {
        return versionString ;
      }

public void validateZipCode (IDfSysObject obj, String zipCode, String repository) throws DfException {
     IDfSession session = getSession(repository);
     try {
     if (isValidUSZipcode(zipCode)){
         obj.setString("zipcode",zipCode);
         obj.save();
      }
     } catch (Exception e){
         /* Assuming that transaction is handled outside the code and this says DFC to abort the transaction 
         in case of any error */
        getSessionManager().setTransactionRollbackOnly();
        throw new DfException();
     } finally {
     releaseSession(session);
    }
  }
 private boolean isValidUSZipcode(String zipCode){
     // implement your logic to validate zipcode. 
     // or even call a external webservice to do that 
     // returning true for all zip codes
      return true;
   }
}
Step 3: Generate Jar files and Create Jar Definitions

The next step in SBO creation is to create Jar files which will hold the interface and the implementation classes. These jar files are required to deploy your SBO.

Use Composer’s/Eclipse’s Export JAR option or the command-line jar command to create the jar files.

image image

image

Selecting the sbo package to create the interface jar

image

Selecting the com.ajithp.studynotes.sbo.impl for implementation.

Look at the Composer Export JAR screenshots for the interface and the implementation (refer to the Eclipse documentation for more details). I think the figures above are self-explanatory.

The command line to create a jar file is jar cf <name_of_jar> <input_files>. Please look at the Java documentation for more details on the switches and options of the jar command.

The creation of Jar Definitions is a new step added in Composer.

1) In Composer, change the perspective to Documentum Artifacts, then click New –> Other –> Documentum Artifacts –> Jar Definition

image

2) Click Next, enter a name for the Jar Definition, and click Finish

image

3) Select the Type as Interface if the jar contains only the interface, Implementation if it contains only the implementation of the interface, or Interface and Implementation if a single jar file contains both. Click the Browse button and browse to the jar created in the last step.

In our case, create two Jar Definitions: the first one with type Interface pointing to the jar created for the SBO interface, and a second one with type Implementation pointing to the implementation jar.

untitled

Name the interface Jar Definition zipcodevalidator and the implementation Jar Definition zipcodevalidatorimpl.

Step 4: Create a Module and Deploy the SBO

In Composer, change the perspective to Documentum Artifacts, then click New –> Other –> Documentum Artifacts –> Module

image

Give it a valid name, leave the default folder, and click Finish

image

In the Module edit window, select SBO from the dropdown.

image

Now click Add in the Implementation Jars section under Core Jars. A pop-up window will appear listing all the Jar Definitions of type Implementation or Interface and Implementation. Select the one you want to use for the ZipCodeValidator SBO, that is, ZipCodeValidatorImpl.

image

Click the Select button next to Class Name and select the implementation class, in this case ZipValidator.

image

Now click Add in the Interface Jars section under Core Jars. A pop-up window will appear listing all the Jar Definitions of type Interface or Interface and Implementation. Select the one you want to use for the ZipCodeValidator SBO, that is, ZipCodeValidator.

image

For details on the other options, refer to the Documentum Composer manual. Save the module.

Now right-click on the project and install the Documentum project.

image

Click the Login button; after logging in, click Finish to start the installation.

image

 

Look at the Documentum Composer documentation to learn more about the installation options.

How to use SBO from a Client Application

Follow the steps below to instantiate an SBO from a client application.

1) Get the Local client

2) Create login info and populate the login credentials.

3) Create an IDfSessionManager object

4) Use newService() on the client object to create an SBO instance

// create client
  IDfClient myClient = DfClient.getLocalClient();
  // create login info
  IDfLoginInfo myLoginInfo = new DfLoginInfo();
  myLoginInfo.setUser("user");
  myLoginInfo.setPassword("pwd");
  // create session manager
  IDfSessionManager mySessionManager = myClient.newSessionManager();
  mySessionManager.setIdentity("repositoryName", myLoginInfo);
  // instantiate the SBO
  IZipValidatorSBO zipValidator = (IZipValidatorSBO) myClient.newService( IZipValidatorSBO.class.getName(), mySessionManager);
  // call the SBO service
  zipValidator.validateZipCode(obj, zipCode, "repositoryName");

Download this Study Note (PDF)

Using Java reflection to reduce Code and Development time in DFS


 

Java reflection is one of the most powerful APIs of the Java language; it can be used to reduce code significantly.

Most current enterprise applications consist of different layers, and they use value objects to transfer data from one layer to another. Writing explicit getters and setters for every attribute of these value objects inflates the code and the development time of an application; effective use of reflection can reduce both significantly.

So let’s take a scenario: I have an object type MyObjectType extending dm_document with 50 additional attributes. As of Documentum 6.5, dm_document has 86 attributes, so adding 50 more gives us 136 attributes for this object type. Consider a standard web application using DFS behind it which needs to manipulate (add or edit) instances of this object type. The service needs to add all these attributes to the PropertySet of the DataObject representing that instance and then call the appropriate service.

 

Assuming the bean instance for MyObjectType is named myObjectBean, the standard code will be something like this:

  ObjectIdentity objIdentity = new ObjectIdentity("myRepository");
  DataObject dataObject = new DataObject(objIdentity, "dm_document");
  PropertySet properties = dataObject.getProperties();
  properties.set("object_name", myObjectBean.getObject_Name());
  properties.set("title", myObjectBean.getTitle()); 
  // omitted for simplicity


  objectService.create(new DataPackage(dataObject), operationOptions);

 

In the above code you have to explicitly set each individual attribute of the object; the more attributes there are, the more complex and messy the code becomes.

Take another example, where you have to retrieve an object's information and pass it over to the UI layer.

 myObjectBean.setObject_name(properties.get("object_name").getValueAsString());
 myObjectBean.setTitle(properties.get("title").getValueAsString());
 myObjectBean.setMy_Custom_Property(properties.get("my_custom_property").getValueAsString());

This operation becomes even more complex if you decide to match the data types of your bean with those of the object type.

 

So what is the best approach to reducing this complexity? The answer is effective use of the reflection API.

Let’s take a step-by-step approach to handle this issue.

To understand this better, consider the following attributes of mycustomobjecttype:

 

  • first_name – string
  • last_name – string
  • age – integer
  • date_purchased – time
  • amount_due – double
  • local_buyer – boolean

 

Java Bean

Create a Java Bean that matches the Object Type

 import java.util.Date;

 public class Mycustomobjecttype {
  protected String first_name ;
  protected String last_name  ;
  protected int age;
  protected Date date_purchased  ;
  protected double amount_due  ;
  protected boolean local_buyer ;
  public int getAge() {
    return age;
  }
  public void setAge(int age) {
    this.age = age;
  }
  public double getAmount_due() {
    return amount_due;
  }
  public void setAmount_due(double amount_due) {
    this.amount_due = amount_due;
  }
  public Date getDate_purchased() {
    return date_purchased;
  }
  public void setDate_purchased(Date date_purchased) {
    this.date_purchased = date_purchased;
  }
  public String getFirst_name() {
    return first_name;
  }
  public void setFirst_name(String first_name) {
    this.first_name = first_name;
  }
  public String getLast_name() {
    return last_name;
  }
  public void setLast_name(String last_name) {
    this.last_name = last_name;
  }
  public boolean isLocal_buyer() {
    return local_buyer;
  }
  public void setLocal_buyer(boolean local_buyer) {
    this.local_buyer = local_buyer;
  }
}

Getting the Values from PropertySet (Loading Java Bean)

……

List<DataObject> dataObjectList = dataPackage.getDataObjects();
DataObject dObject = dataObjectList.get(0);
Mycustomobjecttype myCustomObject = new Mycustomobjecttype();
populateBeanFromPropertySet(dObject.getProperties(),myCustomObject);

……

// See the Reflection in Action here 
public void populateBeanFromPropertySet(PropertySet propertySet, Object bean)
  throws Exception {
 BeanInfo beaninformation;
 beaninformation = Introspector.getBeanInfo(bean.getClass());
 PropertyDescriptor[] sourceDescriptors = beaninformation.getPropertyDescriptors();
 for (PropertyDescriptor descriptor : sourceDescriptors) {
     Object result = null;
     String name = descriptor.getName();
    if (!name.equals("class")) {
      if (propertySet.get(name) != null) {
        if (descriptor.getPropertyType().getName().equals("int")) {
          result = new Integer(propertySet.get(name)
              .getValueAsString());
        } else if (descriptor.getPropertyType().getName().equals("double")) {
          result = new Double(propertySet.get(name).getValueAsString());
         } else if (descriptor.getPropertyType().getName().equals("boolean")) {
          result = new Boolean(propertySet.get(name).getValueAsString());
         } else if (descriptor.getPropertyType().getName().equals("java.util.Date")) {
          DateProperty dat = (DateProperty)propertySet.get(name);
          result = dat.getValue();
        }else {
          // none of the other possible types, so assume it as String
          result = propertySet.get(name).getValueAsString();
        }
        if (result != null)
          descriptor.getWriteMethod().invoke(bean, result);
      }
     }
  }
}

Setting Values to Property Set

 

public DataPackage createContentLessObject(Mycustomobjecttype myCustomType) throws Exception {
ObjectIdentity objectIdentity = new ObjectIdentity("testRepositoryName");
// the repository type name matches the bean's simple class name in lower case
DataObject dataObject = new DataObject(objectIdentity, myCustomType.getClass().getSimpleName().toLowerCase());
PropertySet properties = populateProperties(myCustomType);
properties.set("object_name",myCustomType.getFirst_name()+myCustomType.getLast_name() );
dataObject.setProperties(properties);
DataPackage dataPackage = new DataPackage(dataObject);
OperationOptions operationOptions = new OperationOptions();
return objectService.create(dataPackage, operationOptions);
}

 

// Reflection in Action  
public PropertySet populateProperties(Object bean)throws Exception {
BeanInfo beaninfo;
PropertySet myPropertyset = new PropertySet();
beaninfo = Introspector.getBeanInfo(bean.getClass());  
PropertyDescriptor[] sourceDescriptors = beaninfo
      .getPropertyDescriptors();
  for (PropertyDescriptor descriptor : sourceDescriptors) {
    String propertyName = descriptor.getName();
    if (!propertyName.equals("class")) {
        // dont set read only attributes if any
       // example r_object_id 
       if (!propertyName.startsWith("r")) {
        Object value = descriptor.getReadMethod().invoke(bean);
       if (value != null) {
          myPropertyset.set(propertyName, value);
        }
      }
   }
 }
  return myPropertyset;
}

Chaining of Custom Services in DFS


 

There is an interesting drawback in Documentum Foundation Services version 6.5.

Issue:

When you chain custom services and try to build them, the build fails. Let’s look at a scenario from the DFS sample code itself.

@DfsPojoService(targetNamespace = "http://common.samples.services.emc.com", requiresAuthentication = true)
public class HelloWorldService {

    public String sayHello(String name) {
        ServiceFactory serviceFactory = ServiceFactory.getInstance();
        IServiceContext context = ContextFactory.getInstance().getContext();
        try {
            IAcmeCustomService secondService = serviceFactory.getService(IAcmeCustomService.class, context);
            secondService.testExceptionHandling();
        } catch (ServiceInvocationException e) {
            e.printStackTrace();
        } catch (CustomException e) {
            e.printStackTrace();
        } catch (ServiceException e) {
            e.printStackTrace();
        }
        return "Hello " + name;
    }
}

In this DFS sample code I am chaining services. Everything looks fine, but when you build this service, the generateArtifacts Ant task fails with a ClassNotFound compiler error at:

IAcmeCustomService secondService = serviceFactory.getService(IAcmeCustomService.class, context);

What happens here is that when the build does its initial cleanup, all the generated client interfaces are deleted, and DFS currently does not check for any dependencies.

Let me take the example of dfs-build.xml, which is part of the CoreDocumentumProject in Composer.

<generateArtifacts serviceModel="${gen.src.dir}/${context.root}-${module.name}-service-model.xml" destdir="${gen.src.dir}/">
    <src location="${src.dir}" />
    <classpath>
        <path refid="projectclasspath.path" />
    </classpath>
</generateArtifacts>
</target>

 

In this, we cannot set any exclusion path on <src location="${src.dir}" />, simply because even if you provide a <fileset/> or <dirset/> with a pattern set, it is not recognized.

I raised a support case with EMC and they told me that this is not currently supported, and that they will add it as a feature request.

This means we cannot chain custom services unless EMC fixes this or we use a semi-manual workaround to overcome the issue.

The workaround that I found

Follow these steps to overcome this issue

Step 1,

Identify the services that call the custom services and create a new source directory for them in Composer; here I am calling it depended_src. Move the services that call the custom services there. The depended_src directory should be in a separate path from the web services src.

src-img1

Step 2

1) Now edit the build file and add these two properties:

 

<property name="my.core.services.classes" value="${service.projectdir}/Web Services/bin/classes" />

<property name="dep.src.dir" value="${service.projectdir}/depended_src" />

The dep.src.dir property should point to the depended_src location mentioned in Step 1.

2) Create an additional target for generateModel and generateArtifacts:

<target name="generateDependencies" depends="generate">
    <echo message="Calling generateDependencies" />
    <generateModel contextRoot="${context.root}" moduleName="${module.name}" destdir="${gen.src.dir}/">
        <services>
            <fileset dir="${dep.src.dir}">
                <include name="**/*.java" />
            </fileset>
        </services>
        <classpath>
            <pathelement location="${my.core.services.classes}" />
            <path refid="projectclasspath.path" />
        </classpath>
    </generateModel>
    <generateArtifacts serviceModel="${gen.src.dir}/${context.root}-${module.name}-service-model.xml" destdir="${gen.src.dir}/">
        <src location="${dep.src.dir}" />
        <classpath>
            <pathelement location="${my.core.services.classes}" />
            <path refid="projectclasspath.path" />
        </classpath>
    </generateArtifacts>
    <!-- signal build is done -->
    <!-- used by DFSBuilder.java -->
    <copy todir="${src.dir}/../" file="${basedir}/dfs-builddone.flag" />
</target>

3) Now edit dfs-build.properties and add the following property:

service.projectdir= <absolute path to the project>

Step 3

1) Run the generate task,

2) Copy all the service entries (everything between <module> and </module>) from <context-root>-<module-name>-service-model.xml; you can find this file in the <project_dir>\Web Services\bin\gen-src folder

3) Now run the generateDependencies task that was created in Step 2

4) Now edit <context-root>-<module-name>-service-model.xml and add the copied service entries back to this file

5) If you want to create the jar files now you can call the package task after this.

This should help you chain custom services. If you have found any alternate ways, please comment.

 

Federation in Documentum


Federation is one of the most common distributed Documentum models. In this model, multiple Documentum repositories run as a federation, with one governing repository and multiple member repositories. Let’s try to find out more about federation.

 

Take this typical scenario: a major pharmaceutical company, ABC Corporation, has multiple research centers and production plants across the globe, and it has multiple Documentum repositories storing various information. A user logged into a corporate application needs to fetch documents from these various repositories in a single session. Each repository in this scenario should have the same set of users, groups, and ACLs for this architecture to work, and manually managing this kind of scenario is troublesome and error prone.

 

Now let’s see what a federation can do to make this less complex.

As I mentioned above, a federation consists of a governing repository and member repositories. All changes made to global users, groups, and external ACLs in the governing repository are automatically reproduced in the member repositories.

 

Requirements for Federation

·         Object type definitions should be the same in all participating repositories.

·         User and group definitions should be the same in all participating repositories.

·         The server on which the governing repository runs must project to the connection brokers of the servers where the member repositories run.

·         The servers on which the member repositories run must project to the connection brokers of the server where the governing repository runs.

·         If any of the participating Content Servers have trusted server licenses, then either
the servers should be configured to listen on both secure and native ports, or
the secure connection default for clients should allow clients to request a connection on either a native or secure port.

 

A Few Bullet Points about Federation

·         Any alteration made to an object type will not be automatically pushed to the participating repositories.

·         Only users or groups marked as global when they are created will be pushed / synchronized to the participating repositories.

·         Users that are instances of object types extended from dm_user will not be automatically pushed; this happens only if you specify the subtype in the federation configuration.

·         Each repository can be part of only a single federation.

·         A federation may contain different Content Server versions.

·         A federation may contain a mix of trusted and non-trusted Content Servers.

 

 Download this Study Note (PDF)

Non Qualifiable properties and Property Bag in Documentum Objects


Before getting into the details of property bags, let’s quickly see what Qualifiable and Non-Qualifiable properties are.

 

Qualifiable Properties

Most object attributes are Qualifiable properties. A property is Qualifiable if it is saved (persisted) in a column of the object’s underlying tables (the _s or _r tables) in the database. Attributes are Qualifiable by default. I am not getting into much detail on Qualifiable properties here; see the following link for more on this.

 

Object Attribute :- Repeated and Single Value Attributes in Database

 

Non-Qualifiable Properties

These attributes do not have a column of their own in the object’s underlying _s or _r tables. They are saved in a serialized format in the i_property_bag property of the underlying object. The bullet points below reveal some interesting facts about Non-Qualifiable attributes.

 

·         Though these properties can be returned by a DQL query, they cannot be used in the qualifying clause (the WHERE clause of the query). The exception to this rule is that Non-Qualifiable properties can be used in the WHERE clause of full-text (FTDQL) queries.

·         These properties can be full-text indexed.

·         If Non-Qualifiable properties are part of the SELECT list of a query, the query must also include r_object_id in the SELECT list.

·         If a property is Non-Qualifiable and of type string, the length of that attribute must be less than the value of the max_nqa_string key in server.ini (only if this key is set in server.ini; the default value is 2000).

 

 

Let’s see a few related DQLs.

 

The following DQL creates an object type mycustomtype with firstname as a Qualifiable attribute, country as a single-valued Non-Qualifiable attribute, and phone as a Non-Qualifiable repeating attribute.

 

CREATE TYPE "mycustomtype" ("firstname" string(64), country string(64) NOT QUALIFIABLE, phone string(10) REPEATING NOT QUALIFIABLE) WITH SUPERTYPE "dm_document"

 

The following query creates an object of type mycustomtype (you may notice that there is no difference in the CREATE query compared with Qualifiable properties).

 

CREATE mycustomtype OBJECT SET "firstname" = 'Hello World', SET "country" = 'US', SET phone[0] = '1111111111'

 

The following query returns the attributes from mycustomtype.
Note:
Make sure that you have r_object_id in the SELECT list if you are selecting Non-Qualifiable attributes, otherwise you will get the following error:
DM_QUERY2_E_MISSING_OBJECTID_IN_SELECT_LIST

 

Select r_object_id, firstname, country from mycustomtype;
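On the DFC side, Non-Qualifiable attributes are read and written exactly like any other attribute; the property-bag storage is transparent to the API. Below is a minimal sketch assuming the mycustomtype type created above exists in your repository; the qualification string, the new values, and the session-acquisition details are illustrative only.

import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class PropertyBagDemo {

    // Reads and updates Non-Qualifiable attributes of a mycustomtype object.
    public void updateCountry(IDfSession session) throws DfException {
        // Fetch an object of our custom type (illustrative qualification).
        IDfSysObject obj = (IDfSysObject) session.getObjectByQualification(
                "mycustomtype where firstname = 'Hello World'");

        // Non-Qualifiable attributes are accessed with the normal getters/setters,
        // even though they are persisted in i_property_bag / r_property_bag.
        String country = obj.getString("country");
        String firstPhone = obj.getRepeatingString("phone", 0);
        System.out.println("country=" + country + ", phone[0]=" + firstPhone);

        obj.setString("country", "IN");
        obj.appendString("phone", "2222222222");
        obj.save();
    }
}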

 

What is Property Bag

Property bag is a relatively new term in Documentum. It is a property that is used to store other properties and their values for an object. The Non-Qualifiable properties of an object and their values are stored in its property bag (both single-valued and repeating).

Besides Non-Qualifiable properties, the property bag can also hold the properties and values of aspects, if the aspect properties are enabled with OPTIMIZEFETCH.

 

i_property_bag and r_property_bag

 

The i_property_bag property is defined on dm_sysobject. This attribute is of datatype string and can hold up to 2000 characters. This ensures that all object types inheriting from dm_sysobject have the ability to carry their own property bag.

 

In scenarios where the object type has no parent type defined and you create a Non-Qualifiable attribute, this property is automatically added to the object type.

 

Once a property bag is added to an object type, it can never be removed from that object type.

 

The r_property_bag property is silently added to the object type definition when i_property_bag is added. This repeating attribute is used to store the overflow from i_property_bag: if the names and values of the properties stored in i_property_bag exceed 2000 characters, the overflow is stored in r_property_bag. It is a repeating string property of 2000 characters.

 

Download This Study Note (PDF)