88dddaf430b5bc38ab8228902bb61821-Supplemental-Conference.pdf
Supplementary figure 1. Ablation study: each row represents the ablated layer and each column the module ablated from that layer; for example, the first panel shows ablation of attention-key in layer 5. Different layers in the GPT2-XL model were ablated, and the consequence of ablation on curvature was measured for 2000 sentences in the UD corpus. The red bar shows the layer where ablation was applied.
Supplementary figure 3. A. Curvature values for 2000 sampled sentences in the RWKV model (RNN), for both the trained and untrained versions. B. Correlation between model-generated surprisal and curvature in the RWKV model. Diamonds: syntactic surprisal.
Supplementary figure 5. Effect of different decoding strategies on GPT2-XL sequence generation and its comparison to ground-truth (true); same as figure 4b in the main manuscript.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > Canada (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.28)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (7 more...)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.05)
- North America > Canada > Quebec > Montreal (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.04)
- North America > Dominican Republic (0.04)
- North America > United States > Iowa (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
3e9f0fc9b2f89e043bc6233994dfcf76-AuthorFeedback.pdf
We appreciate this point and will revisit the word choice. What is given to the turkers? We will provide the full prompt in the revision, along with other details (we used 327 annotators) and discussion. For overall trustworthiness, for instance, we asked "Does the article read like it comes from a trustworthy source?" Nevertheless, BERT is worse at neural fake news discrimination compared with Grover.