7 Matching Annotations
  1. Nov 2020
    1. I enlisted its help in drafting myself a new biography, infused with the spirit of Star Wars hero Luke Skywalker.

      ||JovanNj|| How is this done?

  2. Sep 2020
    1. Here is a summary of the article on GPT-3 published in the Guardian:

      • GPT-3 identifies patterns and generates sentences.
      • Humans are in control.
      • The editors discarded 90% of the text GPT-3 generated.
      • Humans had to write the prompt and edit the generated text.

      ||Jovan||

  3. Jul 2020
    1. GPT-2 was described as a “chameleon-like” synthetic text generator, but it wasn’t state-of-the-art in downstream tasks like question answering, summarization, or translation.

      What is the difference between text generation and downstream tasks such as question answering, summarization, and translation? Here we can discuss the potential usability of GPT-2 (see the generation sketch after this list). ||JovanNj||||djordjej||||NatasaPerucica||||Katarina_An||

    2. GPT achieved state-of-the-art in language tasks by pairing supervised learning with unsupervised pre-training (or using the parameters from an unsupervised step as a starting point for the supervised step).

      How can GPT combine supervised learning with unsupervised pre-training? How does this work in practice (see the fine-tuning sketch below)? ||JovanNj||||Jovan||
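
      A minimal sketch of what GPT-2's open-ended generation looks like in practice, assuming the Hugging Face transformers library (our choice of toolkit, not something the annotated article prescribes). It illustrates the difference raised above: free-form continuation comes naturally to GPT-2, while summarization or question answering only work as far as a prompt can coax them out of the same continuation behaviour.

      ```python
      # Sketch: GPT-2 as a pure text generator (assumes `pip install transformers torch`).
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")

      # Open-ended generation: GPT-2 simply continues the text it is given.
      print(generator("Digital diplomacy is", max_new_tokens=30)[0]["generated_text"])

      # A "downstream task" by prompting only: the model continues after "TL;DR:",
      # but nothing guarantees the continuation is a faithful summary.
      prompt = "The article explains how GPT-3 was trained on billions of words. TL;DR:"
      print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
      ```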
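
      As a rough answer to the question above, the sketch below shows the two-stage recipe, again assuming the Hugging Face transformers library. Stage 1, unsupervised pre-training on raw text, is what produced the published "gpt2" weights; stage 2 reuses those weights as the starting point for a supervised task, here a toy sentiment classifier with made-up labels.

      ```python
      import torch
      from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default

      # Stage 2 starts from the pre-trained language-model weights;
      # only the small classification head on top is initialised from scratch.
      model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
      model.config.pad_token_id = tokenizer.pad_token_id

      texts = ["A wonderful, hopeful speech.", "A dull and confusing statement."]
      labels = torch.tensor([1, 0])  # toy supervised labels
      batch = tokenizer(texts, padding=True, return_tensors="pt")

      optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
      outputs = model(**batch, labels=labels)  # supervised loss on top of pre-trained weights
      outputs.loss.backward()
      optimizer.step()  # one fine-tuning step
      ```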

  4. Jun 2020
    1. Given an example of a task, such as an incomplete sentence, and then the completed sentence, GPT-3 will proceed to complete any incomplete sentence it's given. 

      This could be an interesting task for us (see the prompt sketch after this list). ||djordjej||

    2. GPT-3 is the triumph of an over-arching generality. Just feed it an enormous amount of text till its weights are ideal, and it can go on to perform pretty well on a number of specific tasks with no further development.

      An out-of-the-box solution that reduces the need for fine-tuning.

    3. A more fundamental limitation of the general approach described in this paper – scaling up any LM-like model, whether autoregressive or bidirectional – is that it may eventually run into (or could already be running into) the limits of the pretraining objective,

      The limit of the approach: regardless of the processing power invested (which is huge), scaling up alone will not keep improving it. ||Jovan|| ||NatasaPerucica|| ||Katarina_An||
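
      A sketch of the in-context pattern described in the first annotation above, and of what "no further development" means in the second: the task is demonstrated entirely inside the prompt, and the model's weights are never updated. The complete() call is a hypothetical stand-in for whatever GPT-3 completion endpoint is available; only the example-then-query prompt format comes from the quoted text.

      ```python
      # Sketch: few-shot prompting for sentence completion.
      def build_prompt(example_incomplete: str, example_complete: str, new_incomplete: str) -> str:
          return (
              f"Incomplete: {example_incomplete}\n"
              f"Complete: {example_complete}\n"
              f"Incomplete: {new_incomplete}\n"
              f"Complete:"
          )

      prompt = build_prompt(
          example_incomplete="Diplomacy is the art of",
          example_complete="Diplomacy is the art of letting someone else have your way.",
          new_incomplete="Artificial intelligence will change negotiations by",
      )

      # completion = complete(prompt)  # hypothetical call to a GPT-3 completion API
      print(prompt)
      ```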
