Nexes the Old (Nexesenex)
AI & ML interests
Playing with the latest features of KoboldCPP and quants of LlamaCPP.
Sharing my favorite model quants with everyone.
Nexesenex's activity
Update README.md · #1 opened about 1 month ago by Nexesenex
Miqu / Mistral Medium f16 / bf16 weights · #19 opened about 1 month ago by Nexesenex
Superb job · #3 opened about 2 months ago by Nexesenex
Are QK and IQ quantizations made from the F16 or BF16 GGUF? · #2 opened 2 months ago by Nexesenex
b3389 is already out (2) · #1 opened 2 months ago by Nexesenex
Good model (8) · #2 opened 4 months ago by Nexesenex
Thank you VERY MUCH for this! (5) · #1 opened 4 months ago by Nexesenex
L3 8b abliterated fp16 gguf measurements for information · #2 opened 4 months ago by Nexesenex
iMatrix files (8) · #1 opened 4 months ago by Nexesenex
Promising showcase model for sub-100b frankenmerges of 70b models! (2) · #2 opened 5 months ago by Nexesenex
Split quants (12) · #1 opened 5 months ago by Nexesenex
iMatrix effect on other languages? (1) · #10 opened 6 months ago by Udo0815
Quantization of output weight (7) · #2 opened 6 months ago by Nexesenex
Any chance for new IQ1_S versions? (3) · #9 opened 6 months ago by g4be
Smaug 72b q1_s gguf? (2) · #2 opened 7 months ago by vonjack
iMatrix, IQ2_XS & IQ2_XXS (13) · #2 opened 7 months ago by Nexesenex
A good surprise (2) · #1 opened 7 months ago by Nexesenex
What settings are you using for this iMatrix quant? (3) · #1 opened 7 months ago by cosmojg
A request for quantization (3) · #1 opened 7 months ago by Kotokin
103b and 120b models (2) · #7 opened 7 months ago by OrangeApples
Your benchmarks on 2 more models? (8) · #1 opened 7 months ago by Nexesenex
Surprisingly good for stories (4) · #2 opened 8 months ago by cliffwalls
Complete the quants · #3 opened 7 months ago by Nexesenex
LoRA? (3) · #1 opened 7 months ago by ProphetOfBostrom
iMatrix & remaining quants (1) · #1 opened 7 months ago by Nexesenex
Quant requests (2) · #2 opened 8 months ago by Nexesenex
Benchmarks! (3) · #2 opened 8 months ago by ChuckMcSneed
q5/q6 GGUF? (1) · #1 opened 8 months ago by johnnnna
Graphed by GPT-4 (6) · #4 opened 8 months ago by deleted
Interesting fate of my quant (9) · #1 opened 8 months ago by mishima
Thanks for the new quants (2) · #1 opened 8 months ago by FlareRebellion
Q6/Q8 variants? (2) · #6 opened 8 months ago by gileneo
Upload tokenizer.model (1) · #1 opened 8 months ago by Nexesenex
Upload tokenizer.model · #2 opened 8 months ago by Nexesenex
Some benchmarks (1) · #1 opened 8 months ago by Nexesenex
Low perplexity! (1) · #1 opened 8 months ago by brucethemoose
CodeLlama-70B (2) · #5 opened 8 months ago by eramax
An interesting yet useless consideration on whether the fp16 is out or not (6) · #21 opened 8 months ago by Nexesenex
34b model tests (7) · #1 opened 8 months ago by Nexesenex
Thanks man! (10) · #1 opened 8 months ago by Nexesenex
Kyllene and MergeMonster (1) · #4 opened 8 months ago by Nexesenex
Generational loss? (5) · #1 opened 8 months ago by FlareRebellion
Kooten made IQ quants of MiquMaid v1 (1) · #3 opened 8 months ago by Nexesenex
Memory usage (6) · #2 opened 8 months ago by sergkisel3v
Benchmarks are a clean sheet (1) · #2 opened 8 months ago by Nexesenex
A one-shot comparison of Miqu IQ2 vs MiquMaid Q2 (4) · #1 opened 8 months ago by SabinStargem
Might consider attribution (106) · #10 opened 8 months ago by arthurmensch
Arch speculations (6) · #3 opened 8 months ago by grimulkan
miqu-1-70b-Requant-b2007-iMat-c32_ch400-IQ3_XXS.gguf Errors (2) · #2 opened 8 months ago by snombler
Model (25) · #5 opened 8 months ago by mrfakename
Some benchmarks (4) · #2 opened 8 months ago by Nexesenex
Please upload the full model first (88) · #1 opened 8 months ago by ChuckMcSneed
What is this? (6) · #1 opened 8 months ago by brucethemoose
Gratitude (7) · #1 opened 8 months ago by Nexesenex
Upload tokenizer.model · #3 opened 8 months ago by Nexesenex