11,016 Matching Annotations
  1. May 2023
    1. First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to people and society. Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent. So AI shouldn’t be hidden. Consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system. And finally, showing the impact. For higher risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public. And to attest that they’ve done so by following risk-based use case-specific approach.

      Q: What are 4 elements of precision regulation as proposed by IBM?

    2. a precision regulation

      Language

      Precision regulation is another concept to follow.

    3. a threshold of capabilities

      What is 'a threshold'? As always, the devil is in the details.

    4. We believe that the benefits of the tools we have deployed so far vastly outweigh the risks,

      Balancing narrative: Opportunities 80 / Risks 20

    5. be My Eyes, used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment.

      Optimistic narrative

    6. We think it can be a printing press moment.

      Paradigm shift narrative

    7. But the basic question we face is whether or not this issue of AI is a quantitative change in technology or a qualitative change.

      Critical question. It is a quantitative shift which will evolve into a qualitative one.

    8. We had four bills initially that were considered by this committee and what may be history in the making. We passed all four bills with unanimous roll calls, unanimous roll calls. I can’t remember another time when we’ve done that.

      Child safety online is one of the rare issues that unite all political forces worldwide. Will it be extended to AI?

    9. will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country?

      Balance narrative; choice narrative.

    10. I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology we developed.

      A good reminder of Jung's work. It is in line with Mary Shelley's Frankenstein warnings.

      ||Jovan||

    11. is it gonna be more like the atom bomb, huge technological breakthrough, but the consequences severe, terrible, continue to haunt us to this day

      Analogy - atomic bomb

    12. Is it gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape that empowered ordinary, everyday individuals, that led to greater flourishing, that led above all to greater liberty?

      Analogy with Printing press

    13. We should not repeat our past mistakes, for example, Section 230

      Acknowledging the mistake with Section 230.

    14. scorecards and nutrition labels

      Can this analogy work?

    15. known risks

      The real problem is in 'known'. We can deal with known knowns; (un)known unknowns are the major problem.

    16. transparency

      AI principles

    17. Now we have the obligation to do it on AI before the threats and the risks become real. Sensible safeguards are not in opposition to innovation.
    18. we may need something like CERN, global, international, and neutral, but focused  on AI safety, rather than high-energy physics.  

      CERN for AI

    19. Chatbots can clandestinely  shape our opinions, in subtle yet potent ways, potentially exceeding what social media can do.  Choices about datasets may have enormous, unseen influence. 
    20. I call datocracy, the opposite of democracy:
    21. These  guardrails should be matched with meaningful steps by the business community  to do their part.
    22. an AI Ethics Board
    23. a lead AI ethics official
    24. on regulatory guardrails
    25. A risk based approach ensures that guardrails for AI apply to any application, even as  this new, potentially unforeseen developments in the technology occur, and that  those responsible for causing harm are held to account.
    26. a “precision regulation” approach to  artificial intelligence. This means establishing rules to govern the deployment of AI  in specific use-cases, not regulating the technology itself. 

      This is a new concept: 'precision regulation'.

    27. international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting. 

      Call for intergovernmental oversight

    28. safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration. 
    29. a combination of licensing or registration requirements
    30. adhere to an appropriate set of safety requirements,
    31. regulation of AI is essential,
    32. to minimize any harmful effects for workers and businesses.

      How?

    33. the Alignment Research Center (ARC)
    34. a Cybersecurity Working Group
    35. a Cybersecurity Grant Program
    36. novel security controls to help protect core model intellectual property.

      Security and intellectual property.

    37. We will continue to explore partnerships with industry and researchers, as well as with governments, that encompass the full disinformation lifecycle. 
    38. takes a whole-of-society approach

      Also widening responsibility to the whole of society.

    39. we recognize that there is more work to do to educate users about the limitations of AI tools, and to reduce the likelihood of inaccuracy.
    40. Thorn’s Safer37 service
    41. to generate hateful, harassing, violent or adult content, among other categories,

      Prohibited content.

    42. Any ChatGPT user can opt-out of having their conversations be used to improve our models.33 Users can delete their accounts,34 delete specific conversations from the history sidebar, and disable their chat history at any time.35 

      Fair points, if this is the case in practice.

    43. via our API

      What about other uses?

    44. for the purposes of advertising, promoting our services, or selling data to third parties

      What about other purposes?

    45. Iterative deployment

      'Iterative deployment' seems to be the keyword. Can an 'agile approach' be applied to policy and law? Is it transferable from the technology sector?

    46. people and our institutions need time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further.

      Is this 'passing responsibility' for products to citizens and government?

    47. unsafe content

      What is 'unsafe content'?

    48. including but not limited to, generation of violent content, malware, fraudulent activity, high-volume political campaigning, and many other unwelcome areas.

      This could be part of the content which is prohibited.

    49. disallowed content

      Who decides what is 'disallowed content'? Is there any list of this type of content provided by OpenAI?

    50. These opportunities are why former U.S. Treasury Secretary Lawrence Summers has said that AI tools such as ChatGPT might be as impactful as the printing press, electricity, or even the wheel or fire.

      It is a good example of an argument from authority. This comparison of digital technology to the printing press, electricity, the wheel, or fire is probably the most frequently used in any discussion of the impact of digital technology on society.

      In this case, Lawrence Summers is quoted because of his authority in the US political/academic establishment.

    51. Microsoft is an important investor in OpenAI, and we value their unique alignment with our values and long-term vision, including their shared commitment to building AI systems and products that are trustworthy and safe.

      Why would Microsoft invest if they did not have a privileged position in OpenAI? For example, do they have privileged access to GPT?


    1. “What I keep emphasizing to people is to just start using this,” says Mollick. As workers get increasingly fluent, he adds, they can find themselves ahead of the curve, and at a distinct advantage in the workplace. Workers resistant to AI could be seen as unwilling or incapable of adapting, says Frey. “I think workers that don't work with AI are going to find their skills [become] obsolete quite rapidly. So, therefore, it's imperative to work with AI to stay employed, stay productive and have up to date skills.”
    2. increasing the demand for jobs including data analysts and scientists who work with the technology to create best practices in the workplace.
    3. might be able to identify a confirmation bias in their work, meaning they look for evidence to support an outcome they already believe exists
    4. functions as a sounding board – a tool to bounce ideas off,
    5. In his own field of academia, for instance, he’s seen it test for counterarguments to a thesis, and write an abstract for research. “You can ask it to generate a tweet to promote your paper,” he adds. “There are tremendous possibilities.” For knowledge workers, this could mean creating an outline for a blog and a social media post to go with it, distil complex topics for a target audience

      AI for knowledge workers


    1. How To Evaluate a Chatbot?

      Evaluation is a critical issue.

    2. To ensure data quality, we convert the HTML back to markdown and filter out some inappropriate or low-quality samples.

      ||JovanNj||||anjadjATdiplomacy.edu|| Is this a solution for our markup with TExtus?

    3. Vicuna is created by fine-tuning a LLaMA base model using approximately 70K user-shared conversations gathered from ShareGPT.com with public APIs.
    4. Inspired by the Meta LLaMA and Stanford Alpaca project, we introduce Vicuna-13B, an open-source chatbot backed by an enhanced dataset and an easy-to-use, scalable infrastructure.
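The cleaning step described above (converting HTML back to markdown-style text and filtering out low-quality samples) can be sketched with the standard library alone. This is an illustrative approximation, not the actual Vicuna pipeline; the thresholds and the quality heuristic here are assumptions:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the plain text content of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)
    def text(self):
        return "".join(self.parts).strip()

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return parser.text()

def keep_sample(text: str, min_len: int = 20) -> bool:
    """Crude quality filter: drop short or mostly non-alphabetic samples."""
    if len(text) < min_len:
        return False
    alpha = sum(c.isalpha() or c.isspace() for c in text)
    return alpha / len(text) > 0.8

samples = ["<p>Hello <b>world</b>, this is a useful answer.</p>", "<p>)(*&^%$</p>"]
cleaned = [html_to_text(s) for s in samples]
kept = [t for t in cleaned if keep_sample(t)]
# kept == ["Hello world, this is a useful answer."]
```

A real pipeline would use a dedicated HTML-to-markdown converter and stronger heuristics (deduplication, language detection), but the shape of the step is the same.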

    1. Cerebras (not to be confused with our own Cerebra) trains the GPT-3 architecture using the optimal compute schedule implied by Chinchilla, and the optimal scaling implied by μ-parameterization. This outperforms existing GPT-3 clones by a wide margin, and represents the first confirmed use of μ-parameterization “in the wild”. These models are trained from scratch, meaning the community is no longer dependent on LLaMA.

      ||JovanNj||||anjadjATdiplomacy.edu|| It is a very interesting development. They no longer depend even on LLaMA. How is this possible?

    2. LoRa and ControlNet (not to mention in- and outpainting), there’s a clear benefit to letting people go wild with your tech.
    3. allowing the model to better understand and use context.
    4. We explore two other obvious sources of basis dependency in a Transformer: Layer normalization, and finite-precision floating-point calculations. We confidently rule these out as being the source of the observed basis-alignment.
    5. Alpaca AI is open source and around the same performance as gpt3. https://github.com/tatsu-lab/stanford_alpaca

      ||JovanNj||||anjadjATdiplomacy.edu|| Is Alpaca AI a useful model?

    6. Look into Modular, they have an interesting platform for AI development.
    7. GPT 4 cost well over $100 million to train alone, $700k to run per day.
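The "optimal compute schedule implied by Chinchilla" mentioned above boils down to a widely cited rule of thumb: train on roughly 20 tokens per parameter, with training compute approximated as 6 FLOPs per parameter per token. A back-of-the-envelope sketch (the 20x ratio and 6x factor are the common approximations, not figures from the Cerebras run; the 13B size is illustrative):

```python
def chinchilla_optimal_tokens(n_params: float, ratio: float = 20.0) -> float:
    """Chinchilla rule of thumb: ~20 training tokens per parameter."""
    return ratio * n_params

def approx_train_flops(n_params: float, n_tokens: float) -> float:
    """Standard estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

n = 13e9                                   # a 13B-parameter model
tokens = chinchilla_optimal_tokens(n)      # 2.6e11 tokens (~260B)
flops = approx_train_flops(n, tokens)      # ~2.0e22 FLOPs
```

The point of the schedule is that many earlier GPT-3 clones were trained on far fewer tokens per parameter than this, which is why a compute-matched, Chinchilla-scheduled run can outperform them.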

    1. we focus on collecting a small high-quality dataset.
    2. built using significant amounts of human annotation.
    3. Our results suggest that learning from high-quality datasets can mitigate some of the shortcomings of smaller models, maybe even matching the capabilities of large closed-source models in the future.

      Chance for smaller models ||sorina||||VladaR||||JovanNj||||anjadjATdiplomacy.edu||

    4. will the future see increasingly more consolidation around a handful of closed-source models, or the growth of open models with smaller architectures that approach the performance of their larger but closed-source cousins?
    5. This suggests that in the future, highly capable LLMs will be largely controlled by a small number of organizations, and both users and researchers will pay to interact with these models without direct access to modify and improve them on their own.
    6. the community should put more effort into curating high-quality datasets
    7. it suggests that models that are small enough to be run locally can capture much of the performance of their larger cousins if trained on carefully sourced data
    8. large closed-source models to smaller public models

    1. Google "We Have No Moat, And Neither Does OpenAI"

      ||sorina||||VladaR|| Small data models are becoming a reality. The key is having high-quality data, which we can provide with textus.

    2. Open source alternatives can and will eventually eclipse them unless they change their stance. In this respect, at least, we can make the first move.
    3. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.
    4. But this control is a fiction. Anyone seeking to use LLMs for unsanctioned purposes can simply take their pick of the freely available models.
    5. By owning the platform where innovation happens, Google cements itself as a thought leader and direction-setter, earning the ability to shape the narrative on ideas that are larger than itself.
    6. These models are used and created by people who are deeply immersed in their particular subgenre, lending a depth of knowledge and empathy we cannot hope to match.
    7. But holding on to a competitive advantage in technology becomes even harder now that cutting edge research in LLMs is affordable.
    8. The best are already largely indistinguishable from ChatGPT.
    9. This means that as new and better datasets and tasks become available, the model can be cheaply kept up to date, without ever having to pay the cost of a full run.
    10. Being able to personalize a language model in a few hours on consumer hardware is a big deal, particularly for aspirations that involve incorporating new and diverse knowledge in near real-time.
    11. called low rank adaptation, or LoRA

      ||JovanNj||||anjadjATdiplomacy.edu|| Is this something we should use?

    12. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.
    13. Giant models are slowing us down.
    14. Open-source models are faster, more customizable, more private, and pound-for-pound more capable.
    15. Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.

      ||JovanNj||||anjadjATdiplomacy.edu|| Is it possible to have a personalised AI in an evening?
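On the question above: LoRA makes this plausible because it freezes the pretrained weight matrix W and trains only a low-rank update B·A (B is d_out×r, A is r×d_in), so the number of trainable parameters collapses. A minimal sketch of the parameter arithmetic (the 4096×4096 layer size and rank r=8 are illustrative values, not taken from the article):

```python
def lora_param_counts(d_out: int, d_in: int, r: int):
    """Compare full fine-tuning vs. a rank-r LoRA update for one weight matrix."""
    full = d_out * d_in          # parameters touched by full fine-tuning of W
    lora = r * (d_out + d_in)    # parameters in the factors B (d_out x r) and A (r x d_in)
    return full, lora

full, lora = lora_param_counts(4096, 4096, 8)
# full == 16_777_216, lora == 65_536  ->  256x fewer trainable parameters,
# which is what brings fine-tuning within reach of consumer hardware.
```

At inference time the update is simply added back: the effective weight is W + B·A, so no extra latency is incurred once the factors are merged.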


    1. he world and preparing for change rather than trying to roll history back
    2. The author, Ganesh Sitaraman, calls for an American “grand strategy of resilience.”(link is external)  
    3. it will be the American responsibility to listen more and collaborate more seriously to ensure broadly supported recommendations. 
    4. its resilience and adaptability.
    5. If we had looked forward decades ago to understand and accept what was going to be happening today in front of our eyes, we would have had time to better manage the adjustment and implement the remedies.
    6. “Tragedy and Mobilization”, the world confronts the cumulative effects of unmanaged climate change. 
    7. In “Separate Silos” the world fails to manage a co-existence model and the global order devolves into regional power blocs - the U.S, the European Union, Japan, Korea, Australia, Russia, China, India and some rising states - focused on self-sufficiency. 
    8. The “Competitive Coexistence” scenario is less dangerous primarily because both the U.S. and China make economic growth a priority and to some extent achieve co-dependency on maintaining a stable global order.
    9. “A World Adrift” scenario, the international system breaks down as the rules and institutions of today’s structures are little used by the major powers, regional states and non-state actors.
    10. The scale of transnational challenges, and the emerging implications of fragmentation, are exceeding the capacity(link is external) of existing systems and structures . . .”
    11. Go global.
    12. Bring in more participants from outside the government in working group formats - private enterprise, institutions, and other stakeholders to start discussions earlier than they might otherwise and to speed up the policy formation process.
    13. Re-establish the diplomacy and science career track at the State Department
    14. Tighten the bond between science and diplomacy. 
    15. the research and policy dimension within the U.S. government five to ten years ahead.
    16. Now soft power has a new role to play, not merely as a cultural tool, but as a science and technology avenue of influence.

      Science as part of soft power influence.

    17. the ability to guide outcomes with culture, the sciences and by the power of our example – has receded. 
    18. Overemphasis on the  bilateral model of American diplomacy does not provide the best process for dealing with modern large scale over-the-horizon issues.
    19. new ideas are often confronted by old thinking, passive resistance and a wait-it-out state of mind.
    20. has established a policy ideas channel to inspire new views from within and outside the State Department to challenge “groupthink(link is external)
    21. “Bringing America’s Multilateral Diplomacy into the 21st Century(link is external)”,
    22. The UN is the depository of more than 560 multilateral treaties(link is external)
    23. The Law of the Sea Treaty of 1982
    24. The Outer Space Treaty of 1967
    25. Nuclear and non-nuclear weapons limitation treaties
    26. The Antarctic Treaty of 1959
    27. None is perfect; all treaties have to keep up with the times in order to survive, and all treaties leave some gaps to be solved later. 
    28. The event is a call to action on two fronts: ensuring the safe use of near-Earth orbit and dealing with the dangerous escalation of anti-satellite technology(link is external).
    29. Seeing America pulling back from the world and divided, China announced its plans to replace the United States as the most powerful nation on earth. 
    30. resentment of the middle class.
    31. Young Americans who came of age in the 2000’s have never known a time of peace and tranquility. 
    32. five elements are required: (1) involvement of all the essential stakeholders (those that could make or break an agreement), (2) consensus definition of the problem, (3) sufficient common interests to generate a productive dialogue, (4) a shared commitment by the stakeholders to finding a solution, and (5) successful post-agreement implementation that stands the test of time.
    33. The lack of a genuine partnership between the worlds of science and diplomacy to integrate multidisciplinary subject-matter
    34. Today our diplomats are not trained in the scientific aspects of dealing with issues of global health, climate change, energy renewal, cyber threats, food and water resources, regional or global supply chains and outer space among others.
    35. No global issue of significance today or for the foreseeable future will be solely national - allocating more of the diplomatic circle graph to the multilateral slice is both in our interest and more likely than ever to be the methodology of the future.
    36. to engage with the larger issues looming just over the horizon.
    37. is consumed with managing the moment, the immediate.
    38. by engaging with others at the “early stages” of issues to keep us on the front lines of managing global trends.
    39. one that recognizes the changing nature of the challenges we will inevitably face in the future – not just the problems we face now.

    1. The Luddites were not anti-technology; what they wanted was economic justice.
    2. A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in.
    3. the only way to make things better is to make things worse.
    4. What Žižek advocated for is an example of an idea in political philosophy known as accelerationism
    5. it seems like a way for the people developing A.I. to pass the buck to the government.
    6. A.I. assists capital at the expense of labor.
    7. by hiring consultants, management can say that they were just following independent, expert advice.
    8. we rely on metaphor,

    1. it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on.

      An interesting concept of the 'machine heuristic'.

    2. Copyleft licensing allows for content to be used, reused or modified easily under the terms of a license – for example, open-source software.
    3. But with self-driving cars, the engineers can never be sure how it will perform in novel situations.
    4. that people treat computers as social beings when the machines show even the slightest hint of humanness, such as the use of conversational language.
    5. Such beliefs build on “automation bias” or the tendency to let your guard down when machines are performing a task

    1. However, if not coupled with resources and pressure to actually perform this sector-specific adaptation, this approach runs the risk of resulting in no meaningfully binding regulation at all.
    2. a unique blend of horizontal and vertical elements
    3. sectoral regulators will be outmatched in their efforts to meaningfully constrain businesses applying AI.
    4. If agencies do not coordinate to build common regulatory tools, they risk reinventing the wheel each time a new department is tasked with regulating a specific application of AI.
    5. if industry-dominated standards bodies set weak standards for compliance, the regulation itself becomes weak.
    6. Their legislative bodies will need the ability to amend or add to its main horizontal regulation in order to keep pace with the technology.
    7. the heavy lifting of articulating specific compliance thresholds will be done by Europe’s main standardization bodies

      ||sorina|| Here, again, the focus is on standardisation bodies.

    8. the EU’s broad approach will stifle innovation, and analysts correctly assert that China’s targeted regulations will be used to tighten information controls.
    9. they effectively function to shift power from technology companies to government regulators
    10. such as their sources for training data and potential security risks.
    11. the algorithm registry
    12. disseminating information, as well as setting prices and dispatching workers.
    13. has taken a fundamentally vertical approach: picking specific algorithm applications and writing regulations that address their deployment in certain contexts.
    14. the standards process has historically been driven by industry, and it will be a challenge to ensure governments and the public have a meaningful seat at the table.
    15. Another factor is whether the proposed central and horizontal European AI Office will be effective in supplementing the capacity of national and sectoral regulators.
    16. risk-based approach
    17. the dual imperatives of providing predictability and keeping pace with AI developments.
    18. the requirements in the AI Act with co-regulatory strategies
    19. The easiest way for developers to satisfy these mandates will be for them to adhere to technical standards that are being formulated by European standards-setting bodies.

      ||sorina|| Here is an explanation of why the EU is focusing so much on technical standards. They are likely to play a critical role in AI regulation.

    20. Applications deemed to pose an “unacceptable risk” (such as social scoring and certain types of biometrics) are banned. “High risk” applications that pose a threat to safety or fundamental rights (think law enforcement or hiring procedures) are subject to certain pre- and post-market requirements. Applications seen as “limited risk” (emotion detection and chatbots, for instance) face only transparency requirements. The majority of AI uses are classified as “minimal risk” and subject only to voluntary measures.

      Four levels of risk in the EU AI regulation.

    21. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.

      ||sorina|| Here is an interesting distinction between horizontal (EU) and vertical (China) approaches to AI regulation.
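The four tiers quoted above can be read as a simple classification rule. A minimal illustrative sketch, using only the example applications named in the annotated text (this is a reading aid, not the legal text of the AI Act):

```python
# Tier examples are taken from the annotated summary, not from the Act itself.
RISK_TIERS = {
    "unacceptable": {"social scoring", "certain biometrics"},  # banned outright
    "high": {"law enforcement", "hiring"},                     # pre-/post-market requirements
    "limited": {"emotion detection", "chatbot"},               # transparency requirements only
}

def classify(application: str) -> str:
    """Return the risk tier for an application; everything else is 'minimal'."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    return "minimal"  # default tier: voluntary measures only

# e.g. classify("hiring") -> "high", classify("chatbot") -> "limited"
```

The structural point is the default: most AI uses fall through to "minimal" and face only voluntary measures, which is what makes the approach horizontal yet proportionate.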


    1. in July last year that about 40% of young people turn to TikTok or Meta Platforms-owned Instagram when searching for restaurants, citing an internal study.
    2. Bing’s share of the search market has remained below 3%
    3. to ask follow-up questions or swipe through visuals such as TikTok videos in response to their queries.

      There will be video answers to questions.

    4. an advertising business that made more than $162 billion in revenue last year.
    5. It plans to incorporate more human voices as part of the shift, supporting content creators in the same way it has historically done with websites, the documents say.

    1. The hardware, hilariously, became very easy to procure because of the recent crypto mining boom and bust. Several companies offering AI hardware in the cloud were actually crypto mining operations until recently.

    1. The people who are already well versed in something are going to be the ones capable of making the most helpful applications for that particular field or industry.

      ||VladaR|| This is our main advantage, which we should activate via cognitive proximity. We know what we are talking about, and we know how to use AI.

    2. arent there already LLM models that cite their sources? or I heard that new plugin with chat GPT can cite its sources

      ||JovanNj|| Are there models that can cite sources?

    3. The general consensus is that, especially customer facing automation, MUST be "explainable." Meaning whenever a chat bot or autonomous system writes something or makes a decision, we have to know exactly how and why it came to that conclusion.

      explainability is critical

    4. They are caught up in the hype and just like everyone else have zero clue what's actually going to happen.

      narrative
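On the question of models that cite their sources: citation is usually implemented via retrieval, where the system first fetches passages, answers from them, and reports the passage ids alongside the answer. A minimal keyword-overlap sketch of the idea (the corpus and scoring here are invented for illustration; production systems use embedding search):

```python
# A toy two-document corpus standing in for a real knowledge base.
CORPUS = {
    "doc1": "LoRA is a parameter-efficient fine-tuning method.",
    "doc2": "The EU AI Act takes a risk-based approach.",
}

def retrieve(query: str, k: int = 1):
    """Rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citation(query: str) -> str:
    """Answer from the top passage and attach its id as the citation."""
    doc_id, passage = retrieve(query)[0]
    return f"{passage} [source: {doc_id}]"

# answer_with_citation("what is LoRA fine-tuning")
# -> "LoRA is a parameter-efficient fine-tuning method. [source: doc1]"
```

Because the answer is constrained to retrieved text, the citation is grounded by construction, which also addresses the explainability concern raised above.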


    1. Across the global South, the conflict in Ukraine is seen largely as a European affair, one without an obvious hero or villain.
    2. The overwhelming majority, 720 million, lived in Latin America, Africa, and Asia. More than 400 million lived in Latin America alone. By 2025, only one in five Catholics will be a non-Hispanic Caucasian.
    3. the Holy See has practiced what academics call the “great power” model of diplomacy, attaching itself to the superpower of the day.
    4. More Catholics than ever before live outside the West and don’t see the war in Ukraine on the same terms as Europe and the United States do.

    1. the Stanford Politeness Dataset,
    2. ensuring the LLM produces reliable outputs for a particular business use-case often requires additional training on actual data from this domain labeled with the desired outputs.

      Importance of additional training and data labelling.

    3. a fine-tuned Large Language Model (LLM; a.k.a. Foundation Model).

    1. Open source alternatives can and will eventually eclipse them unless they change their stance. In this respect, at least, we can make the first move.
    2. Research institutions all over the world are building on each other’s work, exploring the solution space in a breadth-first way that far outstrips our own capacity. We can try to hold tightly to our secrets while outside innovation dilutes their value, or we can try to learn from each other.
    3. Part of what makes LoRA so effective is that—like other forms of fine-tuning—it’s stackable. Improvements like instruction tuning can be applied and then leveraged as other contributors add on dialogue, or reasoning, or tool use. While the individual fine tunings are low rank, their sum need not be, allowing full-rank updates to the model to accumulate over time.
    4. calling this the "Stable Diffusion moment" for LLMs.
    5. their efforts are rapidly being eclipsed by the work happening in the open source community.
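The claim that "while the individual fine tunings are low rank, their sum need not be" is plain linear algebra: each LoRA update is a low-rank matrix, but adding several of them can raise the rank. A tiny 2x2 demonstration with rank-1 updates whose sum is full rank (the specific vectors are arbitrary illustrative choices):

```python
def outer(u, v):
    """Rank-1 matrix u v^T, the shape of a single rank-1 LoRA update."""
    return [[ui * vj for vj in v] for ui in u]

def add(m1, m2):
    """Elementwise sum of two matrices, i.e. stacking two updates."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

def det2(m):
    """Determinant of a 2x2 matrix; nonzero means full rank."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

u1 = outer([1, 0], [1, 0])   # rank-1 update: [[1, 0], [0, 0]], det 0
u2 = outer([0, 1], [0, 1])   # rank-1 update: [[0, 0], [0, 1]], det 0
stacked = add(u1, u2)        # their sum is the identity: det 1, full rank
```

Each individual update is singular, yet the accumulated update is full rank, which is exactly why stacked fine-tunings (instruction tuning, then dialogue, then tool use) can keep improving a model without any full-rank training run.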

    1. The Vedic principles have influenced the Indian psyche for many centuries. These principles have underpinned the socio-cultural-religious framework for the development of individual and social moral principles. The Indian psyche and society is in a phase of rapid evolution. Pursuit of Artha and Kama are overtaking the responsibility of Dharma.
    2. Dharma embraces every type of righteous conduct, covering every aspect of life, both religious and secular, that is essential for the sustenance and welfare of the individual, society and creation.
    3. Ontological nature of existence and Dharma (which approximately translates into morality).
    4. These teachings comprehensively bring out the essence of Vedas, primarily Upanishads, in a language that is less terse than that in the original Upanishads
    5. It is then that Lord Krishna enlightens him through the teachings that together form the Bhagavad Gita.
    6. It forms a part of the great epic, Mahabharata which is traditionally ascribed to the sage Vyasa.
    7. The Bhagavad Gita
    8. set of dialogues
    9. Upanishads are passages from the Jnana-kanda section of the Vedas. They are the core of Vedic wisdom and are essentially philosophical in nature