nonetrix committed on
Commit 44f3267
1 Parent(s): e0a6b3e

Add ComfyUI files

Files changed (1)
  1. mega-merge.json +1725 -0
mega-merge.json ADDED
@@ -0,0 +1,1725 @@
+ {
+   "last_node_id": 31,
+   "last_link_id": 35,
+   "nodes": [
+     {
+       "id": 1,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         395,
+         222
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 0,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             1
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "PVCStyleModelMovable_pony160.safetensors"
+       ]
+     },
+     {
+       "id": 2,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         391,
+         397
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 1,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             2
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "accretiondiscxl_v10.safetensors"
+       ]
+     },
+     {
+       "id": 3,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         823,
+         284
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 15,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 1
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 2
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             4
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.71,
+         0.67,
+         0.75
+       ]
+     },
+     {
+       "id": 4,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         1208,
+         284
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 16,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 3
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 4
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             5
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.58,
+         0.5,
+         0.6
+       ]
+     },
+     {
+       "id": 6,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         1544,
+         519
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 2,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             6
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "bambooShootMixMix_v10.safetensors"
+       ]
+     },
+     {
+       "id": 7,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         1613,
+         280
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 17,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 5
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 6
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             7
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.5,
+         0.4,
+         0.5
+       ]
+     },
+     {
+       "id": 9,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         1969,
+         544
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 3,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             8
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "cosmixv2WaifuToWife_v10.safetensors"
+       ]
+     },
+     {
+       "id": 8,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         2035,
+         281
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 18,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 7
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 8
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             9
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.44,
+         0.33,
+         0.42
+       ]
+     },
+     {
+       "id": 11,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         2410,
+         526
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 4,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             10
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "malaSmoothPonyxl_v20.safetensors"
+       ]
+     },
+     {
+       "id": 10,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         2423,
+         277
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 19,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 9
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 10
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             11
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.4,
+         0.25,
+         0.37
+       ]
+     },
+     {
+       "id": 12,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         2773,
+         534
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 5,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             12
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "meichidarkmixReload_meichidarkmixSensual.safetensors"
+       ]
+     },
+     {
+       "id": 13,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         2869,
+         267
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 20,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 11
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 12
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             14
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.37,
+         0.25,
+         0.33
+       ]
+     },
+     {
+       "id": 14,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         3243,
+         521
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 6,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             13
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "mistoonXLCopper_v20Fast.safetensors"
+       ]
+     },
+     {
+       "id": 17,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         3647,
+         522
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 7,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             15
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "mocasemix_v10.safetensors"
+       ]
+     },
+     {
+       "id": 15,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         3286,
+         267
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 21,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 13
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 14
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             16
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.35000000000000003,
+         0.22,
+         0.3
+       ]
+     },
+     {
+       "id": 16,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         3673,
+         263
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 22,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 15
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 16
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             18
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.33,
+         0.2,
+         0.27
+       ]
+     },
+     {
+       "id": 18,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         4015,
+         518
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 8,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             17,
+             21
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "alphonseWhiteDatura_Pony.safetensors"
+       ]
+     },
+     {
+       "id": 19,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         4078,
+         260
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 23,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 17
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 18
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             22
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.31,
+         0.18,
+         0.25
+       ]
+     },
+     {
+       "id": 20,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         4479,
+         264
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 24,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 21
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 22
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             23
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.3,
+         0.16,
+         0.23
+       ]
+     },
+     {
+       "id": 22,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         4475,
+         522
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 9,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             24
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "pixivponyepitamix_v10.safetensors"
+       ]
+     },
+     {
+       "id": 24,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         4889,
+         500
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 10,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             25
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "pixivponyepitamix_v10_1.safetensors"
+       ]
+     },
+     {
+       "id": 21,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         4890,
+         281
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 25,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 24
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 23
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             26
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.28,
+         0.15,
+         0.21
+       ]
+     },
+     {
+       "id": 23,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         5265,
+         282
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 26,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 25
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 26
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             27
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.27,
+         0.14,
+         0.2
+       ]
+     },
+     {
+       "id": 25,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         5290,
+         485
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 11,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             28
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "thickCoatingStyle_pdxl10.safetensors"
+       ]
+     },
+     {
+       "id": 26,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         5680,
+         260
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 27,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 28
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 27
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             29
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.26,
+         0.13,
+         0.18
+       ]
+     },
+     {
+       "id": 27,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         5683,
+         456
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 12,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             30
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts."
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "vendoPonyRealistic_v13Lora.safetensors"
+       ]
+     },
+     {
+       "id": 28,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         6063,
+         249
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 28,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 30
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 29
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             31
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.25,
+         0.12,
+         0.17
+       ]
+     },
+     {
+       "id": 31,
+       "type": "CheckpointSave",
+       "pos": [
+         6887,
+         258
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 30,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model",
+           "type": "MODEL",
+           "link": 33
+         },
+         {
+           "name": "clip",
+           "type": "CLIP",
+           "link": 34
+         },
+         {
+           "name": "vae",
+           "type": "VAE",
+           "link": 35
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointSave"
+       },
+       "widgets_values": [
+         "checkpoints/ComfyUI"
+       ]
+     },
+     {
+       "id": 29,
+       "type": "ModelMergeBlocks",
+       "pos": [
+         6425,
+         247
+       ],
+       "size": {
+         "0": 315,
+         "1": 126
+       },
+       "flags": {},
+       "order": 29,
+       "mode": 0,
+       "inputs": [
+         {
+           "name": "model1",
+           "type": "MODEL",
+           "link": 32
+         },
+         {
+           "name": "model2",
+           "type": "MODEL",
+           "link": 31
+         }
+       ],
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             33
+           ],
+           "shape": 3,
+           "slot_index": 0
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "ModelMergeBlocks"
+       },
+       "widgets_values": [
+         0.25,
+         0.11,
+         0.16
+       ]
+     },
+     {
+       "id": 30,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         6048,
+         452
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 13,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             32
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts.",
+           "slot_index": 1
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": null,
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space."
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "whiteUnicorn_v30.safetensors"
+       ]
+     },
+     {
+       "id": 5,
+       "type": "CheckpointLoaderSimple",
+       "pos": [
+         1035,
+         519
+       ],
+       "size": {
+         "0": 315,
+         "1": 98
+       },
+       "flags": {},
+       "order": 14,
+       "mode": 0,
+       "outputs": [
+         {
+           "name": "MODEL",
+           "type": "MODEL",
+           "links": [
+             3
+           ],
+           "shape": 3,
+           "tooltip": "The model used for denoising latents.",
+           "slot_index": 0
+         },
+         {
+           "name": "CLIP",
+           "type": "CLIP",
+           "links": [
+             34
+           ],
+           "shape": 3,
+           "tooltip": "The CLIP model used for encoding text prompts.",
+           "slot_index": 1
+         },
+         {
+           "name": "VAE",
+           "type": "VAE",
+           "links": [
+             35
+           ],
+           "shape": 3,
+           "tooltip": "The VAE model used for encoding and decoding images to and from latent space.",
+           "slot_index": 2
+         }
+       ],
+       "properties": {
+         "Node name for S&R": "CheckpointLoaderSimple"
+       },
+       "widgets_values": [
+         "pixivponyepitamix_v10.safetensors"
+       ]
+     }
+   ],
+   "links": [
+     [
+       1,
+       1,
+       0,
+       3,
+       0,
+       "MODEL"
+     ],
+     [
+       2,
+       2,
+       0,
+       3,
+       1,
+       "MODEL"
+     ],
+     [
+       3,
+       5,
+       0,
+       4,
+       0,
+       "MODEL"
+     ],
+     [
+       4,
+       3,
+       0,
+       4,
+       1,
+       "MODEL"
+     ],
+     [
+       5,
+       4,
+       0,
+       7,
+       0,
+       "MODEL"
+     ],
+     [
+       6,
+       6,
+       0,
+       7,
+       1,
+       "MODEL"
+     ],
+     [
+       7,
+       7,
+       0,
+       8,
+       0,
+       "MODEL"
+     ],
+     [
+       8,
+       9,
+       0,
+       8,
+       1,
+       "MODEL"
+     ],
+     [
+       9,
+       8,
+       0,
+       10,
+       0,
+       "MODEL"
+     ],
+     [
+       10,
+       11,
+       0,
+       10,
+       1,
+       "MODEL"
+     ],
+     [
+       11,
+       10,
+       0,
+       13,
+       0,
+       "MODEL"
+     ],
+     [
+       12,
+       12,
+       0,
+       13,
+       1,
+       "MODEL"
+     ],
+     [
+       13,
+       14,
+       0,
+       15,
+       0,
+       "MODEL"
+     ],
+     [
+       14,
+       13,
+       0,
+       15,
+       1,
+       "MODEL"
+     ],
+     [
+       15,
+       17,
+       0,
+       16,
+       0,
+       "MODEL"
+     ],
+     [
+       16,
+       15,
+       0,
+       16,
+       1,
+       "MODEL"
+     ],
+     [
+       17,
+       18,
+       0,
+       19,
+       0,
+       "MODEL"
+     ],
+     [
+       18,
+       16,
+       0,
+       19,
+       1,
+       "MODEL"
+     ],
+     [
+       21,
+       18,
+       0,
+       20,
+       0,
+       "MODEL"
+     ],
+     [
+       22,
+       19,
+       0,
+       20,
+       1,
+       "MODEL"
+     ],
+     [
+       23,
+       20,
+       0,
+       21,
+       1,
+       "MODEL"
+     ],
+     [
+       24,
+       22,
+       0,
+       21,
+       0,
+       "MODEL"
+     ],
+     [
+       25,
+       24,
+       0,
+       23,
+       0,
+       "MODEL"
+     ],
+     [
+       26,
+       21,
+       0,
+       23,
+       1,
+       "MODEL"
+     ],
+     [
+       27,
+       23,
+       0,
+       26,
+       1,
+       "MODEL"
+     ],
+     [
+       28,
+       25,
+       0,
+       26,
+       0,
+       "MODEL"
+     ],
+     [
+       29,
+       26,
+       0,
+       28,
+       1,
+       "MODEL"
+     ],
+     [
+       30,
+       27,
+       0,
+       28,
+       0,
+       "MODEL"
+     ],
+     [
+       31,
+       28,
+       0,
+       29,
+       1,
+       "MODEL"
+     ],
+     [
+       32,
+       30,
+       0,
+       29,
+       0,
+       "MODEL"
+     ],
+     [
+       33,
+       29,
+       0,
+       31,
+       0,
+       "MODEL"
+     ],
+     [
+       34,
+       5,
+       1,
+       31,
+       1,
+       "CLIP"
+     ],
+     [
+       35,
+       5,
+       2,
+       31,
+       2,
+       "VAE"
+     ]
+   ],
+   "groups": [],
+   "config": {},
+   "extra": {
+     "ds": {
+       "scale": 1,
+       "offset": [
+         -5635.498001420395,
+         96.06443919089634
+       ]
+     }
+   },
+   "version": 0.4
+ }
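
For orientation: the workflow above loads fifteen Pony/SDXL checkpoints and folds them together pairwise through a chain of fifteen ModelMergeBlocks nodes, each carrying three ratios (its widgets_values) for the UNet's input, middle, and output blocks, with the ratios shrinking along the chain (0.71/0.67/0.75 at node 3 down to 0.25/0.11/0.16 at node 29); the final CheckpointSave node writes the merged model with the CLIP and VAE taken from the pixivponyepitamix_v10 loader (node 5). The snippet below is a rough plain-PyTorch sketch of what one merge step computes, not ComfyUI's actual implementation: the merge_blocks helper and its key-matching heuristic are illustrative, and it assumes the widget value is the weight kept from model1, with model2 contributing 1 - ratio (the convention of ComfyUI's simple merge node).

import torch
from safetensors.torch import load_file

# Hypothetical stand-in for one ModelMergeBlocks step; NOT ComfyUI's API.
# Assumptions: the three widgets_values are the (input, middle, output)
# block ratios, and each ratio is the weight kept from model1. The real
# node patches only the diffusion model; this rough version lerps every
# key the two checkpoints share.
def merge_blocks(sd1, sd2, input_r, middle_r, output_r):
    merged = {}
    for key, w1 in sd1.items():
        w2 = sd2.get(key)
        if w2 is None or w1.shape != w2.shape:
            merged[key] = w1  # keep model1 where the checkpoints disagree
            continue
        if "middle_block" in key:
            r = middle_r
        elif "output_blocks" in key:
            r = output_r
        else:
            r = input_r  # input blocks, embeddings, everything else
        merged[key] = r * w1.float() + (1.0 - r) * w2.float()
    return merged

# First two merge steps of the chain (nodes 3 and 4), using the
# filenames from the workflow above:
a = load_file("PVCStyleModelMovable_pony160.safetensors")
b = load_file("accretiondiscxl_v10.safetensors")
c = load_file("pixivponyepitamix_v10.safetensors")  # node 5's loader
step1 = merge_blocks(a, b, 0.71, 0.67, 0.75)    # node 3
step2 = merge_blocks(c, step1, 0.58, 0.5, 0.6)  # node 4

Under these assumptions the running merge is just a repeated per-block linear interpolation, which is why the shrinking ratios matter: they control how much each pairing pulls the accumulated blend toward the other operand at that step.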