Feedback and collaboration.

#1
by Squish42 - opened

The experimental tool used to build this dataset can be found here: https://github.com/Justin42/dsbuild

The 'dataset.yaml' file included in this dataset repository allows the build steps to be easily reviewed and iterated on. Pull requests are enabled. If you would like to suggest a change to this dataset, you may submit a PR or discuss it in this thread. Discussion of tool changes to support this dataset is also welcome, including additional output formats on request.

@Squish42 Thanks a lot for the further work on this. I was checking your version of bluemoon.json, and it looks like rather than removing the OOC it adds newlines (\nOOC\n) around it, retaining that part of the message. Any reason why that could be? Note also that OOC is sometimes typoed as OCC, of which there are at least two instances. I guess this should go into the regex too.

Thanks! I missed those completely. Actually, I didn't try to match on OOC at all, but rather on (OOC), ((OOC)), and OOC: . The line breaks are there because those occurrences are actually surrounded by HTML <br/> line breaks, which are converted to normal \n line breaks to maintain some additional formatting. They seem to mostly precede character sheets, so I suspect we want to keep those.

I've gone ahead and made the suggested changes and uploaded the new build.

  • Removed 30 occurrences of \nOOC\n
  • Removed 13 occurrences of ooc;.*($|\n)
  • Removed 1 occurrence of OCC:.*\n
  • Removed 1 occurrence of Ooc:.*\n
  • Laughed at "ahh a regular occ-occurrence"
  • Laughed at "we just talk this out?" he asked, before speaking OOC. "Can I roll diplomacy, to try and seduce her? If it does..."
    I think this is roleplay roleplay.

The new regex for OOC is '(OOC|ooc|OoC|OCC|Ooc)(:| |-|;)\n?.*?(\n|$)'
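As a quick illustration, here is roughly what that strip step does, sketched in Python (the actual tool is Dart; the helper name and test string here are made up):

```python
import re

# The OOC pattern given above: a marker, a separator, then the rest of that line.
OOC_PATTERN = re.compile(r'(OOC|ooc|OoC|OCC|Ooc)(:| |-|;)\n?.*?(\n|$)')

def strip_ooc(message: str) -> str:
    # Hypothetical helper, not part of dsbuild.
    return OOC_PATTERN.sub('', message)

assert strip_ooc("He waved.\nOOC: gotta go!\nShe smiled.") == "He waved.\nShe smiled."
```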

Removed some additional edge cases preceding \nOOC\n occurrences:

  • Removed 1 occurrence of "IC: "

  • Removed 1 occurrence of "Kinda short -- sorry!"

  • Removed 1 occurrence of "Just gonna toss this here so I don't have to hunt for it again."

  • Removed 1 occurrence of "If you want to hurry it up and get to the smut, just let me know XD I'm fine with whatever pace you like best."

  • Removed 1 occurrence of "???OCC. I intentionally jumbled it around a bit to make it more of a blur.???\n"
    Added to edge-cases since it contains unicode.
    It's primarily to prevent running the OOC strip step twice when producing both unicode and non-unicode output.
    If we find more of these unicode ooc's I'll add an additional regex match after the unicode strip step.

I think the rest are character sheets and dice rolls, which probably give good context and are unlikely to come out without specific prompting. We can look at cleaning that stuff too if it causes any issues at all.

Thanks again! Let me know if you find anything else, or if by chance I accidentally stripped some context somewhere.

> Thanks! I missed those completely. Actually, I didn't try to match on OOC at all, but rather on (OOC), ((OOC)), and OOC: . The line breaks are there because those occurrences are actually surrounded by HTML <br/> line breaks, which are converted to normal \n line breaks to maintain some additional formatting. They seem to mostly precede character sheets, so I suspect we want to keep those.

Ah okay, that makes sense. This is all pretty nice now; I see most of the OOC and other cruft has been correctly trimmed (at least around the known tokens). There are a few cases where an "ON"-type label still remains, even though the OOC preceding it has been correctly removed. See BiC: in 3 cases and one ON:\n. Maybe these can be purged as an additional step to keep things simple.

Edge-case findings:

  • *Edit:

Some additional finds (I checked short messages of 15 characters or fewer, although not all of them are undesired) for which the message in its entirety is one of the following (a quick filter sketch follows the list):

  • the end^^
  • i think so^^
  • -Placeholder-
  • (
  • Fin^^
  • Fin ^^
  • Fin.^^
  • Molly was
  • beep boop
  • quick track!
  • now what?))
  • ))
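A simple way to surface candidates like these, as a hedged sketch (assuming the ShareGPT-style bluemoon.json layout with "conversations" and "value" keys):

```python
import json

with open("bluemoon.json") as f:
    data = json.load(f)

# Collect every whole-message text of 15 characters or fewer.
shorts = sorted({m["value"] for conv in data for m in conv["conversations"]
                 if len(m["value"]) <= 15})
print(shorts)
```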

With that, it looks like ^^ has also been used as an OOC indicator, but its use is rather inconsistent... Will post more findings here as I find them. There are things like

  • forgot the last part in the last post ^^
  • \nnighty night^^ see you in the morning.)
  • \ngoing to bed ^^ g'night.
  • \n(couldn't think of anything else. ^^
  • (ok, you may kill me now^^)

Also, for general consistency, we might want to reduce all the meaningless double spaces to just one. Not sure what the obsession is with hitting the space bar twice between every sentence. Phone posters?

Edit, more remarks: there are some long chains of <><><><> in the dataset, or <>*<>*<>, which do not look very useful. We can maybe get rid of those too. Things like <ahem> and <cough> are alright, but we could consider substituting those < and > with * again for better consistency, and to avoid any issues with stopping tokens like we've had recently with the vicuna tunes.

I thought about pruning some short messages like that before. One of the issues I was concerned about is the effect on conversation flow. Currently conversations are pruned if they don't have alternating speakers, so if pruning a message introduces a double post the entire conversation will be pruned. This is currently reported as a total of 809 pruned conversations. I suspect a large number of these are just empty or have only opening posts, but I haven't actually examined them at all.

Should we go ahead and prune those messages and see what happens? Worst case a lot of conversations get dropped and we need to think about a better strategy for handling conversation roles and double posts. They're mapped to the actual sender rather than just alternating human/gpt.
The existing FullMatch prunes actually prevent entire conversations from being dropped.
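For reference, the alternation rule described above amounts to something like this (a rough sketch assuming ShareGPT-style records; the real logic lives in dsbuild):

```python
def has_alternating_speakers(messages: list[dict]) -> bool:
    speakers = [m["from"] for m in messages]
    return all(a != b for a, b in zip(speakers, speakers[1:]))

def prune_non_alternating(conversations: list[dict]) -> list[dict]:
    # Dropping a single message can create a double post, which then
    # causes the whole conversation to fail this check and be pruned.
    return [c for c in conversations
            if has_alternating_speakers(c["conversations"])]
```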

I've gone ahead with all of the other changes and uploaded a new build. I'm thoroughly enjoying how easy it is to iterate on this. Thanks a lot for all the suggestions. Keep them coming!

File size actually increased, but I can't tell whether it was due to any regression; I'll have to take a closer look.
It almost certainly has something to do with regexes matching whitespace or consecutive non-word characters.
Tomorrow I'll have some more time to review OOC extractions and diffs, but I think we actually gained context somewhere.

  • Replaced over 1100 occurrences of <> with *
  • Reduced **** to *** (a maximum of 3 consecutive)
    458 hits here now which seem to indicate timelapses and the like, looks mostly alright.
  • Reduced ..... to ....
  • Reduced horizontal whitespace to a maximum of 1 consecutive (previously 3)
  • Removed occurrences of BiC: , bic; and ON:\n
  • Removed 5 occurrences of ^.*\^\^;.*\){1}
    These were the only ones that looked consistent enough to hit with regex without removing other stuff.
    I thought they were remnants of bad regex matches but they're really typed like this.
  • Removed 2 occurrences of "\nthe end?^^"
  • Removed 1 occurrence of "\nthe end^^"
    4 messages containing only "the end^^" still remain.
  • Removed 9 additional edge cases:
    "nighty night^^ see you in the morning.)"
    "oh yes, Abandoned is a good one^^ Lets do the Time Warm Again is really funny too!)"
    "i dunno, lol do you want to?^^ we could always have it that Lucius and Narcissa HAVE to get married or Lucius risks the disinheratance from his family or something like an arranged marraige or something?)"
    "i thought he was a vampire? oh well. ^^"
    "don't frget about Love and Lies!^^)"
    "If I manage to catch your eye, please post. ^^"
    "(couldn't think of anything else. ^^"
    "*Edit: Plus a tank top and a spandex/latex pants blend."
    "THE END? or maybe THAT'S ALL FOLKS! or even better AND THEY ALL LIVED HAPPILY EVER AFTER! (ok, you may kill me now^^)"

If I find a good enough reason to implement sender-specific regex matches or something I'll do it.

Thanks! You're right about the flow of the conversation being jeopardized. I think for now we should only remove the individual messages that are the last in a thread (things like "the end", "Fin", "beep boop" and some others). Until that is solved in some sophisticated way, it's probably better to keep the rest. Sadly it looks like most of them will fall under the gpt role rather than human. (If a last-message check is time-consuming to implement, feel free to ignore this; we can deal with it later too.)

Some OOC/edge cases remain in the new version:

  • Draco.\nOOC. you remember that not i sent explaining changing and half bloods? now is when it comes in handy XDD)
  • he asked, before speaking OOC. "Can I roll diplomacy, to try and
  • over to join them.\nThe end.^^
  • \ng'night! sleep well^^
  • \ngoing to bed ^^ g'night.
  • (and yes, there is a difference^^)
  • \ngoing to bed. see you in the morning!^^)
  • wanna see Harry's Animagus form?^^ it's so cute you get Three pics!^^\nRawrandLazyandHungry
  • \n^^^ her shorts are just a little longer lol
  • \nshortest RP ever XDD

Probably alright if we treat the ^^ OOC as edge cases for now; thankfully there are not too many of them.

Should we similarly limit excessive !!!!!!!, ?????? and ...... to a maximum of three characters? I'm also finding some unnecessarily long repetitions of special characters. Try this: ([^a-zA-Z0-9 !?\.])\1{4,}. Examples (a sketch of the reductions follows the list):

  • \n>>>>>>>>>><<<<<<<<<<\n
  • ===============================================
  • \n++++++++++++++++\n
  • ______________
  • ///////
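A minimal sketch of the suggested reductions in Python, combining the special-character run cap with the punctuation caps and the double-space cleanup mentioned earlier (thresholds as proposed above):

```python
import re

def reduce_runs(text: str) -> str:
    text = re.sub(r'([^a-zA-Z0-9 !?\.])\1{4,}', r'\1', text)  # ______ -> _
    text = re.sub(r'!{4,}', '!!!', text)    # !!!!!!! -> !!!
    text = re.sub(r'\?{4,}', '???', text)   # ?????? -> ???
    text = re.sub(r'\.{4,}', '...', text)   # ...... -> ...
    text = re.sub(r' {2,}', ' ', text)      # collapse runs of spaces
    return text

print(reduce_runs("No!!!!!  way?????? ______ //////"))  # No!!! way??? _ /
```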

Will check more and edit for any findings!

Thanks again, this is all really good.

'he asked, before speaking OOC. "Can I roll diplomacy, to try and..'
This one is funny. It's actually in-character. The characters are roleplaying.
Spoiler: He's going to seduce by rolling a nat 20 with his loaded die.

Included all other suggested changes and uploaded a new build.

I found a lot of additional end-of-message (ooc) occurrences. I'm pretty sure we can hit those with the regex; I just didn't want to be overzealous there.
Currently it actually matches for stuff like \n\n(ooc)$ but I suspect closer to (ooc)$ will be fine. When I looked at this before I don't think I saw any in-character context at the end of messages like that.
Expect this update later today.

The excessive non-word characters definitely need to go. .... is still grammatically correct I think. The others are nonsense for sure.
That match worked pretty well I think. Previously it was only targeting consecutive non-words at the beginning and end of messages.

Stripped 385+ additional non-word characters:

  • Removed 29 occurrences of ([^a-zA-Z0-9 !?\.])\1{4,}
    Consecutive non-word characters.
    Renamed trim_nonwords step to strip_nonwords to clarify new intent.
  • Added step reduce_punctuation
    Follows reduce_separators
  • Reduced !!!! to !!! (a maximum of 3 consecutive)
    339 hits.
  • Reduced ???? to ??? (a maximum of 3 consecutive)
    8 hits.
  • Reduced ?!?!?! to ?!?! (a maximum of 2 consecutive)
    8 hits.
  • Reduced !?!?!? to !?!? (a maximum of 2 consecutive)
    1 hit.

Removed 44+ additional edge cases:

  • 1 occurrence of now what?)) replaced with Continue.
    I think this makes sense here specifically.
    I couldn't find any other good candidates for this, and I wouldn't want to do this in too many places anyways.
    We can just remove it if we find a better strategy or decide to drop the conversation.
  • Removed 1 occurrence of \nThe end.^^
  • Removed 1 occurrence of \nthe end?
  • Removed 14 occurrences of \nThe end?
  • Removed 8 occurrences of \nThe end.
  • Removed 1 occurrence of \nThe End. (awwww lol)
  • Removed 3 occurrences of \nThe End.
  • Removed 3 occurrences of \nThe End?
  • Removed 1 occurrence of \n~The End~
    I think I might prefer external replacement lists to regex for these kinds of edge cases too but I'm not sure yet.
    It's also just way faster.
  • g'night! sleep well^^
  • going to bed ^^ g'night.
  • (and yes, there is a difference^^)
  • going to bed. see you in the morning!^^)
  • going to bed^^ see you in the morning.)
  • going to bed^^)
  • OOC. you remember that not i sent explaining changing and half bloods? now is when it comes in handy XDD)
  • wanna see Harry's Animagus form?^^ it's so cute you get Three pics!^^\nRawrandLazyandHungry
  • \n^^^ her shorts are just a little longer lol
  • (HAD TO SAY IT)
  • (I didn't really know how to lead into what happens next, so I'll leave that up to you. ^^

Pruned 14 additional messages:

  • The End ^^\nshortest RP ever XDD
    1 pruned message changing the last speaker role to "gpt".
  • beep boop
    1 pruned message changing the last speaker role to "human".
  • the end^^
    3 pruned messages changing the last speaker role to "human".
    1 pruned message changing the last speaker role to "gpt".
  • i think so^^
    1 pruned message changing the last speaker role to "gpt".
  • Fin^^
    1 pruned message changing the last speaker role to "human".
  • Fin ^^
    1 pruned message changing the last speaker role to "gpt".
  • Fin.^^
    1 pruned message changing the last speaker role to "gpt".
  • -Placeholder-
    1 pruned conversation containing only an opening message.
  • Molly was
    1 pruned conversation containing only an opening message.

Currently at 811 pruned conversations. I'll probably add an extraction for this so we can better inspect conversation flow in the dropped conversations. I think it's all what we'd expect, though.

Coming soon:

  • A lot of additional OOC at the end of messages.

I think implementing some transformers for last message matching is a good idea, I'll take a look at this soon too. I think a couple other things to help with conversation flow could be good but I'm not sure what yet.

There are still 3 messages containing only )), 2 of which we don't have an effective strategy to remove yet due to conversation flow.
One of those can potentially be dealt with by also removing the next message, with zero loss of context. I'll make sure a transformer implementing that strategy gets added.
We'll also get one after last-message matching is implemented.
Which will still leave one. Maybe a 'Continue.' is fine there too. I don't like it, though.

Just to clarify, we want to resolve with "gpt" as the last speaker, right?
I just assume this because "gpt" should always respond. But I don't actually know what it expects during training.

Some tooling notes:
Builds currently take 2 minutes and 24 seconds, writing out to RAM. There's a lot of low-hanging fruit for optimization, though.
As a bigger change, post could run in parallel with only a one-conversation buffer (since we assume all input is presorted). The reason it currently doesn't is that it can be used by multiple output streams.
That can be offloaded to some buffered broadcast stream implementation. Memory consumption should decrease (we never need all data in RAM anymore), and build times should hopefully decrease as well. It also introduces a potential entry point for distributed post: it could be a socket to a remote machine or another local Dart isolate (for multithreading). Everything there can be efficiently serialized and streamed.
This dataset is small enough for that stuff not to matter though.

Great! I pulled the latest. Excellent work on the tool as well; it will make pruning work so much easier in the future. I have a 30b model ready (not up yet) based on the dataset as of before my previous message, although I did some manual removal of the newly found items I mentioned while waiting for your update. The training job was already scheduled earlier, so it couldn't wait for the most recent fixes unfortunately, but in any case we'll probably have another finetune relatively soon. The current state of the dataset is in any case already much better than the first dataset the bluemoon model was ever trained on.

Next round of random finds:

  • \n***THE END***
  • \nTHE END, perfectly timed question lol
  • ////brooooo sorry my auto correct went to Hatsumi lol it was supposed to be Natsu lol\n
  • (opps lol I went back and your the one that I was foxng to suggest and ask to do a demon/human or werewolf/human thing with palms face)
  • :: Why is it you make your text SO big? Lol. ::\n
  • Because I can't see that good even though I have glasses. LOL xD\n
  • :: Oh, okay. Haha. Just wondering. ^_^ ::\n
  • (man messed up the name already XD )
  • Hmm I know, but its to much effort tonightmaybe tomorrow. XD And I couldnt be lucky enough to have a bread maker, but I do know how to make bread, just dont do it a lot. XD)\n

> Just to clarify, we want to resolve with "gpt" as the last speaker, right?
> I just assume this because "gpt" should always respond. But I don't actually know what it expects during training.

Not sure if this affects model behavior; I would also be interested to know if it does. Based on experience with the previous models it probably makes no significant difference, but I have no real data to back this up. I might leave those where there's some significant quality prose from the "human", but I guess we could at least remove the low-quality ones.

Awesome! Looking forward to trying out the 30b, although it will run a bit slowly on my hardware. The 13b's have been really creative while still maintaining coherency, so it will be interesting to see how the 30b does.

Uploaded a new build including all suggestions except one (note below); that one is coming next.

Removed 7 additional edge cases:

  • THE END, perfectly timed question lol
  • ////brooooo sorry my auto correct went to Hatsumi lol it was supposed to be Natsu lol
  • :: Why is it you make your text SO big? Lol. ::
  • Because I can't see that good even though I have glasses. LOL xD
  • :: Oh, okay. Haha. Just wondering. ^_^ ::
  • (man messed up the name already XD )
  • Hmm I know, but its to much effort tonightmaybe tomorrow. XD And I couldnt be lucky enough to have a bread maker, but I do know how to make bread, just dont do it a lot. XD)
    The remaining \n should get trimmed in all these cases too.

One remaining:

  • (opps lol I went back and your the one that I was foxng to suggest and ask to do a demon/human or werewolf/human thing with palms face)
    This one occurs at the end of a message. I'm going to try to hit a bunch of these with regex.

And some bonuses:

Replaced 10 occurrences of spelling errors:

  • Replaced 4 occurrences of "possiable" with "possible"
    Includes 1 occurrence of "impossiable" with "impossible"
  • Replaced 6 occurrences of "possiably" with "possibly"

Pruned 13 additional messages:

  • Hahahaha, Yeah, I was a little confused
  • Lol do you want to delted this thread possiably and start a new one I'm sorry
  • Sure, what ever you want!
  • How about a demon/human what do you want the plot to be and if you could start it
  • Okay, do you want to be the demon, or do you want me to be the demon?
  • You be the demon", "Okay, do you want to start today or tomorrow?
  • Today if possiable and what's the plot
  • Hmmmmm I have no idea hahahaha you?
  • Hmmm demon falls for a human even though he ain't supposed to
  • Ok, sounds good! Anything else you want in?
  • Not really maybe demon fight\nAnd that the demon has to shroud him self in human form and reveals himself to her
  • Okay, I'll make it a bit later.
  • Okay can't wait
    Final speaker role remains "gpt".
    A full OOC exchange at the end of a conversation. Only saw this because it followed the other OOC you mentioned.
    I should actually make a case-sensitive FullMatch step.

Refactoring:

  • ooc_prune step moved to last preprocessing step to make it easier to reason about.
  • Updated existing ooc_prune patterns to account for reduce_separators, and reduce_punctuation steps.
    These are still applied before postprocessing steps.
    strip_nonwords and edge_cases could potentially be refactored into preprocessing, with a new step in post for unicode edge cases.
    I have no idea what the unicode builds look like but if there is any interest I can check it out.

I think hash-based full message pruning is going to come soon too. It makes more sense in some cases like this. Especially if there's a need to avoid redistributing the pruned data. Hits can just be reversed during the build anyways. There's still some disadvantages though.

Oops. Fixed a pruned conversation that broke due to the refactor: I forgot about the new spelling correction applied in there.
It's the one with 13 pruned messages, id 557943518 for reference. Back to 811 total pruned conversations.

New build uploaded.

Thanks for the update. The rest of the OOC seems to be more hidden along the lines; I'll update later if I find anything more. The most prominent remaining thing is the endless typos. Optionally, one could use some sort of spell-check script to carefully take care of the most obvious ones, although that might prove difficult due to the uniqueness of some of the actual words used in the RP. Something to think about.

Bluemoon 30b is up: https://huggingface.co/reeducator/bluemoonrp-30b
Updated 13b will follow at some point when the dataset work converges more.

The spelling is pretty bad but maybe the same people making the same mistakes actually makes it easier. I think we can take care of a lot of it pretty quickly, I've mostly just put it off because it's an actual content change. We're getting a lot closer to that stuff now though. It seems like the remaining OOC and conversation flow stuff will be the only major changes remaining.

Today I'm planning to implement multithreaded postprocessing. First steps look good: single-threaded memory consumption is way down, and postprocessing now runs in parallel with preprocessing with no increase in build time. The postprocessing buffer just wasn't useful there. A new buffer will be implemented for resequencing incoming conversations from the postprocessing workers before they are sent to the writers, mostly to avoid dropping conversations or stalling on slow workers. Preprocessing is technically similar and will come in the next week or so. Distributed processing to remote workers will closely follow. This is all happening in a separate branch, so we can continue implementing new transformers or whatever in the meantime.

If all goes well I should be able to spend more time on the dataset today. Given the current state I just want to make sure I review the OOC extractions for loss of context before adding more regex or making any major changes. From what I've seen, there's relevant context in single parentheses in the middle of messages. I assume we want that stuff, but if not we can simplify our strategy there. Similarly, if we find senders who repeatedly use parentheses for OOC, we can look at targeting those specifically. Occurrences at the beginning and end of messages are more often actual OOC, and I suspect any loss of context there will be pretty minimal.

As a side note, as this dataset reaches a more stable state we can consider other datasets too. I'm slowly formulating a strategy for ShareGPT, but it won't be soon. I may be able to tackle something like Creative Freedom's Alpha 1x1 or Beta 1x1 pretty easily in the next couple of weeks or so.

Sounds good. The typos are pretty low priority I think, but one can think of some strategy to take care of some of those while working on other things.

> It seems like the remaining OOC and conversation flow stuff will be the only major changes remaining.

To fix the flow, one possibility is, as a first processing step, to remove all the individual low-quality OOC-only messages, and then afterwards merge messages that come from the same username if they end up being consecutive after that.

> As a side note, as this dataset reaches a more stable state we can consider other datasets too. I'm slowly formulating a strategy for ShareGPT, but it won't be soon. I may be able to tackle something like Creative Freedom's Alpha 1x1 or Beta 1x1 pretty easily in the next couple of weeks or so.

For ShareGPT, are you planning to continue from gozfarb's version, or to start from the original dataset and prune the old findings from that? Not sure how much of that discussion was lost with gozfarb's exit; I don't remember now.

Also related, there's another bluemoon "general" dataset (https://rentry.org/qib8f) which might be worth taking a look into, and perhaps even combining with the fandom one to create a more extensive set. Pretty sure many of the filters already developed here would apply to the majority of content in that other dataset as well.

> To fix the flow, one possibility is, as a first processing step, to remove all the individual low-quality OOC-only messages, and then afterwards merge messages that come from the same username if they end up being consecutive after that.

This sounds good. I'll make sure this strategy gets implemented.
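Roughly, that two-step strategy looks like this (a sketch assuming ShareGPT-style records; the function names are hypothetical):

```python
def fix_flow(messages: list[dict], is_ooc_only) -> list[dict]:
    # Step 1: drop the low-quality OOC-only messages.
    kept = [m for m in messages if not is_ooc_only(m["value"])]
    # Step 2: merge now-consecutive messages from the same sender.
    merged: list[dict] = []
    for m in kept:
        if merged and merged[-1]["from"] == m["from"]:
            merged[-1]["value"] += "\n" + m["value"]
        else:
            merged.append(dict(m))
    return merged
```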

> For ShareGPT, are you planning to continue from gozfarb's version, or to start from the original dataset and prune the old findings from that? Not sure how much of that discussion was lost with gozfarb's exit; I don't remember now.

I think I'm going to start from the source data, primarily for that reason: I want to make sure everything can be replicated. It also has certain advantages in cases where the source data is available but unable to be distributed for whatever reason. The dataset.yaml can be distributed and licensed differently from the source data. In other words, if HF ever decides any of the data should be removed, it is still fully reproducible from the source and collaboration can still continue.

My strategy is going to be vastly different though, and will likely include an additional content moderation dataset as an artifact. We need to better identify what results in unwanted output. I'm pretty sure a static list of topics will never be effective, no matter how big it is. I'm also pretty sure we can leverage existing models to find associations and similarities between tokens efficiently. I don't know the best way to approach this yet, but I think an effective solution is in there somewhere.

Edit: That original RP set is definitely on my list. If there's interest in that one I'm happy to hit it next.

> Edit: That original RP set is definitely on my list. If there's interest in that one I'm happy to hit it next.

At least I'm interested! \o Would be nice, so we can ultimately combine both.

E: random finds again

  • ( Hah! Forge working at Chupacabra would be so amusing! And I hope I handled the girls okay. I'll try and get a better grasp of them all. I had to reference the manga and anime equally for Risa and Arisa, but from what I remember and see of Blair from the anime, she seems to know how and enjoy teasing people in just the right way. )\n
  • (can't spell the first part of it...)
  • READ: Alright, I'm not the best at intros, so here you go. This story takes place in the land of Skyrim. Hopefully you've heard of the game of the year by now.
  • As the title suggests, the majority of this roleplay takes place in the magical time first introduced to us in the stop motion animated classicRudolph's Shiny New Year.
  • \n( Oh! And I'll get a PM together with some ideas for a second RP and what not. Erm, well, if you're still okay with a second. I don't know if it'll be too much or okay since you have two other RPs now. x'D; But just tell me whenever you're interested and feel comfortable. )
  • THIS RP IS OFFICIALLY CLOSED NO FURTHER POSTS WILL BE POSTED AS I HAVE LOST INTEREST IN THE PLOT SORRY FOR THE LOSSES
  • Please be aware this roleplay will contain spoilers for Naruto and Naruto Shippuuden. If you have not watch all of Shippuuden, do NOT read this RP from start to finish. You have been warned.
  • This is a Supernatural Fandom Romance RP between Dean Winchester and Castiel the angel. (Note: One of the rpers has only seen up to season six at the start of this rp. So there will be no spoilers beyond episode 9 of Season 6. Thank you.)\nWARNING! CONTAINS HOMOSEXUAL CONTENT! IF YOU'RE HOMOPHOBIC, GO AWAY!
  • \n*** The Walking Dead RP ***\n
  • Well I doubt he is going to be able to contact me if he is banned....can somebody please tell me why he got banned? I was just rping with him and he role plays well and then I saw he got banned for Ban-Evasion but what exactly was he doing to get banned?
  • Figgure ref\n
  • This rp will be between the character Gin Ichimaru played by True Decade and an OC of mine.\n
  • \nClick to expand...\n (multiple occurrences)
  • (d. I hope that this plot idea is based on the first mortal kombat tournament. I'll be waiting for your reply back to our rp with your character intros next ok.)) (+some others with the pattern below)
  • randomname98766789 said:\nIzzy325 said:\nrandomname98766789 said:\nIzzy325 said:\nrandomname98766789 said:\nIzzy325 said:\nrandomname98766789 said:\n (I wonder what the various "[name] said:" lines are; are they part of the RP or some issue with the dataset?)

There's a batch of OOC around a pattern (d. OOC here ). One can find them with \((d|D)\..*\).

\n[a-zA-Z][^ ]*[0-9]:
These look like usernames; might be something one can remove? This suggests checking the messages against the actual usernames in the original dataset, if that seems feasible.
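For example, a hedged Python check for such leftovers, using a slightly tightened variant of the pattern (\S* instead of [^ ]* so it cannot run across line breaks, and anchored to line starts):

```python
import re

USERNAME_LINE = re.compile(r'^[a-zA-Z]\S*[0-9]:', re.MULTILINE)

def find_username_lines(message: str) -> list[str]:
    return USERNAME_LINE.findall(message)

print(find_username_lines("Izzy325:\nsome reply\nrandomname98766789:"))
# ['Izzy325:', 'randomname98766789:']
```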

(Some of these are individual messages whose removal may work better under the merging scheme.)

I tried to make a preliminary test run on the bluemoon general dataset using the descriptor yaml for the fandom one for now, and it seems I encountered an issue. You might already be aware of it, unless I'm messing something up, but I'm putting the error log below regardless in case you want to have a look:

2023-05-26 19:02:42.066007/CONFIG/dsbuild: Loading dataset descriptor from 'dataset.yaml'
2023-05-26 19:02:42.070180/INFO/dsbuild: Validating descriptor.
2023-05-26 19:02:42.070202/CONFIG/dsbuild: Descriptor valid.
2023-05-26 19:02:58.539274/INFO/dsbuild: Hash Result:
File: bluemoon-general-original-roleplays-(1x1).csv
Source: null
sha512: 0aa4f139997cb2ff8ee8efe868a968da2f5f0c86dc6289c609f1674ae41bd5fa7f9a7ed7987f37ff266cd80d4b47bf3fdb9505fd255d0db0ca4e8f066bf32b95
2023-05-26 19:02:58.539299/INFO/dsbuild: Preparing pipeline...
2023-05-26 19:02:58.539316/INFO/dsbuild: Performing transformations...
2023-05-26 19:02:58.606390/FINE/dsbuild: Progress:
Messages processed: 0 / 1
Conversations processed: 0 / 0
2023-05-26 19:03:04.625944/FINE/dsbuild: Progress:
Messages processed: 17837 / 17846
Conversations processed: 0 / 633
2023-05-26 19:03:10.636558/FINE/dsbuild: Progress:
Messages processed: 38198 / 38214
Conversations processed: 0 / 1043
2023-05-26 19:03:16.642627/FINE/dsbuild: Progress:
Messages processed: 57655 / 57674
Conversations processed: 0 / 1392
2023-05-26 19:03:22.642626/FINE/dsbuild: Progress:
Messages processed: 75864 / 75883
Conversations processed: 0 / 1690
2023-05-26 19:03:28.661129/FINE/dsbuild: Progress:
Messages processed: 95409 / 95429
Conversations processed: 0 / 1971
2023-05-26 19:03:34.662476/FINE/dsbuild: Progress:
Messages processed: 112882 / 112903
Conversations processed: 0 / 2227
2023-05-26 19:03:40.668034/FINE/dsbuild: Progress:
Messages processed: 128717 / 128740
Conversations processed: 0 / 2617
2023-05-26 19:03:46.681291/FINE/dsbuild: Progress:
Messages processed: 139869 / 139893
Conversations processed: 0 / 2930
2023-05-26 19:03:52.681815/FINE/dsbuild: Progress:
Messages processed: 155637 / 155661
Conversations processed: 0 / 3187
2023-05-26 19:03:58.692339/FINE/dsbuild: Progress:
Messages processed: 168301 / 168326
Conversations processed: 0 / 3393
2023-05-26 19:04:04.713835/FINE/dsbuild: Progress:
Messages processed: 178264 / 178290
Conversations processed: 0 / 3665
2023-05-26 19:04:10.717485/FINE/dsbuild: Progress:
Messages processed: 188318 / 188344
Conversations processed: 0 / 3925
2023-05-26 19:04:16.726686/FINE/dsbuild: Progress:
Messages processed: 202267 / 202293
Conversations processed: 0 / 4250
2023-05-26 19:04:22.735639/FINE/dsbuild: Progress:
Messages processed: 218990 / 219016
Conversations processed: 0 / 4273
2023-05-26 19:04:28.741660/FINE/dsbuild: Progress:
Messages processed: 232900 / 232926
Conversations processed: 0 / 4594
2023-05-26 19:04:34.762719/FINE/dsbuild: Progress:
Messages processed: 244845 / 244871
Conversations processed: 0 / 4812
2023-05-26 19:04:40.770487/FINE/dsbuild: Progress:
Messages processed: 254324 / 254350
Conversations processed: 0 / 5077
2023-05-26 19:04:46.793758/FINE/dsbuild: Progress:
Messages processed: 262307 / 262333
Conversations processed: 0 / 5277
2023-05-26 19:04:52.803251/FINE/dsbuild: Progress:
Messages processed: 271491 / 271524
Conversations processed: 0 / 5439
Unhandled exception:
type 'int' is not a subtype of type 'String'
#0      CsvReader.read.<anonymous closure> (package:dsbuild/src/readers/csv_reader.dart:41)
#1      _HandlerEventSink.add (dart:async/stream_transformers.dart:209)
#2      _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:111)
#3      _RootZone.runUnaryGuarded (dart:async/zone.dart:1594)
#4      _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:339)
#5      _BufferingStreamSubscription._add (dart:async/stream_impl.dart:271)
#6      _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:63)
#7      _EventSinkWrapper.add (dart:async/stream_transformers.dart:13)
#8      CsvToListSink._add (package:csv/csv_to_list_converter.dart:246)
#9      CsvToListSink.add (package:csv/csv_to_list_converter.dart:255)
#10     ComplexConverterStreamEventSink.add (package:csv/src/complex_converter.dart:26)
#11     _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:111)
#12     _RootZone.runUnaryGuarded (dart:async/zone.dart:1594)
#13     _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:339)
#14     _BufferingStreamSubscription._add (dart:async/stream_impl.dart:271)
#15     _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:63)
#16     _EventSinkWrapper.add (dart:async/stream_transformers.dart:13)
#17     _StringAdapterSink.add (dart:convert/string_conversion.dart:228)
#18     _StringAdapterSink.addSlice (dart:convert/string_conversion.dart:233)
#19     _Utf8ConversionSink.addSlice (dart:convert/string_conversion.dart:307)
#20     _Utf8ConversionSink.add (dart:convert/string_conversion.dart:300)
#21     _ConverterStreamEventSink.add (dart:convert/chunked_conversion.dart:69)
#22     _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:111)
#23     _RootZone.runUnaryGuarded (dart:async/zone.dart:1594)
#24     _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:339)
#25     _BufferingStreamSubscription._add (dart:async/stream_impl.dart:271)
#26     _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:776)
#27     _StreamController._add (dart:async/stream_controller.dart:650)
#28     _StreamController.add (dart:async/stream_controller.dart:598)
#29     _FileStream._readBlock.<anonymous closure> (dart:io/file_impl.dart:125)
<asynchronous suspension>

Thanks a lot! That's really helpful. It looks like it was caused by a message containing only numbers incorrectly being parsed as an integer. Type inference at its best.
2023-05-26 16:32:11.540129/INFO/dsbuild: Build completed in 0:04:30.587748
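The same pitfall is easy to reproduce in Python for anyone pre-inspecting the csv, e.g. with pandas; forcing string dtypes avoids it (illustrative only, not the actual dsbuild fix):

```python
import io
import pandas as pd

csv_data = io.StringIO("from,value\nalice,12345\n")
print(pd.read_csv(csv_data).dtypes["value"])             # int64: inferred as a number
csv_data.seek(0)
print(pd.read_csv(csv_data, dtype=str).dtypes["value"])  # object: kept as a string
```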

Fixed and pushed to master branch.
If you're currently using the release version from pub.dev you can update to latest source with
dart pub global activate -s git https://github.com/Justin42/dsbuild.git

An update for multithreading should be coming later today too. If it's in a good state I'll push a new release to pub.dev.
There won't be any resequencing yet, so it functions mostly synchronously: when batches are distributed to workers it will always wait for the response to the next batch in the sequence, potentially stalling other workers. This will be changed soon so that incoming responses are buffered and work can continue being distributed to idle workers.
I still expect a big decrease in build time without that optimization, but it will be really important for remote workers.

I haven't been able to get to the next round of dataset updates yet, but the faster build times will be really nice. Hopefully I can get to that today.

Thanks again!

Edit: I skimmed over the output and it really looks quite good. Exciting.

Thanks, it works now after the fix. Yeah, with that the general dataset looks quite okay already. The bluemoon general took 6.8 minutes on my old 2016 Xeon, so multithreading will be a welcome addition.

I noticed that one can possibly further improve the quality of both bluemoons by removing the conversations whose average "gpt" message length falls into the bottom 5%. This looks to be particularly important with the general bluemoon. That way the model can avoid learning to give overly simplistic or short answers, when the general preference seems to be aligned towards longer and more detailed descriptions. I made a script (below) that takes care of it for now. If you find the approach interesting, you might consider incorporating such an optional step as part of the "official" toolchain. Of course it doesn't have to be some bottom %; it could just as well be a fixed, manually specified mean threshold.

import json
import numpy as np
import matplotlib.pyplot as plt

fname = "bluemoon-general.json";
with open(fname,"r") as f:
    d = json.load(f);

#-------------------------------------------------------------------------
#step 1. Drop short and empty threads (less than 3 msg) and gather means
#-------------------------------------------------------------------------

lcg = [];
convOut1 = [];

for conv in d:
    ls = np.array([len(m["value"]) for m in conv["conversations"] if m["from"] == "gpt"]);
    if len(ls) == 0:
        continue; #no "gpt" messages at all; skip to avoid a nan mean
    mean = np.mean(ls);

    if len(conv["conversations"]) <= 3 and mean < 2000:
        continue; #consider 3 messages not enough, unless the mean message length is impressive.

    lcg.append(mean);
    convOut1.append(conv);

m = np.mean(lcg);
s = np.std(lcg);
print("Global mean: {} std {}".format(m,s));

#-------------------------------------------------------------------------
#step 2. Drop conversations that have bottom 5% of mean "gpt" reply length
#-------------------------------------------------------------------------

#plot the distribution of mean message length
fig,ax = plt.subplots();
h,bins,_ = ax.hist(lcg,bins=512,color="orange");

cdf = np.cumsum(h);
cdf /= h.sum();
x = 0.5*(bins[1:]+bins[:-1]);
ax2 = ax.twinx();
ax2.plot(x,cdf,color="black",linestyle="--");

t = np.argmax(cdf > 0.05);
print("Threshold mean: {}".format(x[t]));

convOut2 = [];
short = 0;
shortMessages = 0;
totalMessages = 0;

for conv in convOut1:
    totalMessages += len(conv["conversations"]);
    ls = np.array([len(m["value"]) for m in conv["conversations"] if m["from"] == "gpt"]);
    mean = np.mean(ls);
    if mean < x[t]:
        if short < 10: #print a couple of example conversations of those removed
            print(json.dumps(conv["conversations"][:8],indent=2)); #... up to 8 messages
            print("Average lenght: {}, message count: {}".format(mean,len(conv["conversations"])),80*'-');
        short += 1;
        shortMessages += len(conv["conversations"]);
        continue;

    convOut2.append(conv);

print("Removed: {:.3f}% ({:.3f}% messages)".format(100*short/len(convOut1),100*shortMessages/totalMessages));

with open("bluemoon-general-out.json","w") as f:
    json.dump(convOut2,f,indent=2);

plt.show();

Tool updated with support for multithreading.
2023-05-28 00:45:25.467819Z/INFO/dsbuild: Build completed in 0:00:45.456986

Batch sizes can be configured in the build section of the descriptor.

build:
  messageBatch: 5000
  conversationBatch: 100
  threads: 16

threads defaults to logical processor count.

Note that the batch sizes are per worker, and there are separate buffers for preprocessing (message) and postprocessing (conversation).
The effective buffer size with the current strategy is the combined total multiplied by the number of workers. This stuff only matters if you have memory constraints.
You can comfortably use high batch sizes for local workers. Transfer to and from local workers is a move rather than a copy.
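For a rough sense of scale with the settings above: (5000 messages + 100 conversations) × 16 workers ≈ 81,600 buffered items in the worst case.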

Release versions will come a bit later.
I suspect a possible issue with file writes during pre/post transformers, but I haven't tested it at all.
Edit: Confirmed bug on file access by multiple threads. If you need to do extractions you can set threads: 0 to fall back to the single-threaded behavior. I'll probably just change this so it's a new file per worker or per batch. I won't bother synchronizing the file access there unless it's actually needed for something; otherwise it is left up to the transformer implementations.

All fixed.
RegexExtract now supports a replacement %worker% in file path, which is replaced by the worker name. For local workers this is just going to be the index. So ooc_%worker%.txt becomes ooc_0.txt or whatever.
I've also added a cleanDirectory option to the build settings, partially to account for the RegexExtract transformer overwriting per-batch if overwrite: true is set, which is the intended behavior now. In other words, if you intend to overwrite files from local workers it is more convenient to just add them to cleanDirectory. Paths are relative, and it throws on ../ or ..\ as a simple sanity check, the assumption being that we don't want to delete things outside of the current working directory, even if that isn't actually where the dataset.yaml is. I also assume the listed directories can't be easily recreated (e.g. a symlink to a network or memory-mapped drive or something), so the directories themselves aren't actually deleted. This sounds about right for most use cases.

build:
  messageBatch: 5000
  conversationBatch: 100
  cleanDirectory: [ 'out' ]

I always leave the extraction step out, but you only need to add *extract_ooc before the *ooc_strip step. I've updated the patterns there. It preserves source data, but the format is funny; maybe csv would be better, tbd.

I pushed a new descriptor with batch sizes set. I might not get to a dataset update until tomorrow, but at least it will be much faster.

Nice! The general bluemoon takes now less than 2 mins on my PC, so clearly an improvement.

Alright, finally time for the next round of dataset updates.

The username stuff I will find a way to sort out. I'll probably make a separate usernames.yaml for a single-pass extraction of usernames, then use that as a static replacement list, so everything else is still a single pass. If I find out some of the comments can be stripped via the DOM, that will be easy; we can match on the DOM similar to the anchor-text removal.

Hopefully I didn't forget anything. Pretty sure the other OOC mentioned was hit with regex.

New build pushed.

Edit: Some support for stats and message length stuff will come soon too. It will probably work similar to usernames since it's also a two-pass process.


  • Additional end of line OOC \([^\(]+\)$
    Previously matched for \n\n\(.*?\)$
  • Additional start of line OOC ^.*\)\)\n
    65 hits
  • Long mixed context intro ^As the title suggests.*Littleton.

Removed 408 additional edge cases:

  • (can't spell the first part of it...)
  • READ: Alright, I'm not the best at intros, so here you go. This story takes place in the land of Skyrim. Hopefully you've heard of the game of the year by now. We enter with Luke, a man of about 22, who makes his living doing "odd jobs". If someone has a fight to settle, they call him. I someone has a wolf terrorizing their herd of sheep, he's your man.
  • Please be aware this roleplay will contain spoilers for Naruto and Naruto Shippuuden. If you have not watch all of Shippuuden, do NOT read this RP from start to finish. You have been warned. This RP will follow the anime's footsteps while changing bits of the players amusement. Again, proceed with caution.
  • *** The Walking Dead RP ***\n
  • Figgure ref
  • This rp will be between the character Gin Ichimaru played by True Decade and an OC of mine.\n\n---
  • \nClick to expand...\n
    402 occurrences

Pruned 4 additional messages:

  • THIS RP IS OFFICIALLY CLOSED NO FURTHER POSTS WILL BE POSTED AS I HAVE LOST INTEREST IN THE PLOT SORRY FOR THE LOSSES
  • This is a Supernatural Fandom Romance RP between Dean Winchester and Castiel the angel. \n\nWARNING! CONTAINS HOMOSEXUAL CONTENT! IF YOU'RE HOMOPHOBIC, GO AWAY!
    Alters speaker roles for 1 conversation.
  • \n\nThis will be a one x one between myself and Iikaitlynii based off the AMC television series The Walking Dead​
    Alters speaker roles for 1 conversation.
  • Well I doubt he is going to be able to contact me if he is banned....can somebody please tell me why he got banned? I was just rping with him and he role plays well and then I saw he got banned for Ban-Evasion but what exactly was he doing to get banned?
    Changes last speaker role to "gpt" for 1 conversation.

Known issues:

  • 3 Messages containing only ))
    Pending additional conversation flow transformers and last message matching
  • Edge cases around "randomname98766789 said:\nIzzy325 said:\n"
    This previously accompanied something like 'Moved from PMs to thread'
  • More weird mixed context around usernames
    They just seem to be random comments, not clear yet. Needs a username extraction or pattern(s) that excludes character sheets. Maybe possible to strip with HTML DOM queries.

Plucked some low-hanging fruit for an additional 15-20% reduction in build time.
Builds are down to 37 seconds for me (from 41, then 40), down from ~50 seconds with the new rules.

Reduced a lot of unnecessary memory allocations related to case insensitive matching, and switched to pre-allocated arrays where possible.

Edit: Slightly reduced further by altering some progress handling; output counts are also clearer. Messages only count as processed once they are actually ready for output, and any dropped during post do not count as processed, so now we see accurate totals at the end.
Edit: Further reduced allocations and decreased build time with fixed-size buffers at pre and post.

Thanks a lot! I pulled the changes and I've got a finetune running now.

There's now support for multiple passes. This seemed preferable to requiring separate descriptors or something. This also comes with a CsvExtract transformer that can be used to write message data to csv. The fields are conversation, from, value, and hash. Also included is a FileConcatenate writer that can be used to join output files from multiple workers.

So, now we can define a new pass which extracts usernames and then concatenates everything into a single usernames.csv. There's also support for file globs in a couple places.
I haven't actually stripped any usernames yet, pending some implementation for external replacement lists.

Stats stuff is definitely coming too.

The only data change since the last update is a couple spelling mistakes.


  - pass: &usernames
      input:
        - path: "data/rentry/bluemoon/bluemoon-fanbased-roleplays-1x1.csv"
          sha512: f31d6bd278bc4211736d6aace3917cd0c1d0143bec9bf9d07054f5f9b32060e17399b6ea0935774b2271ac45a88309a60ef8eec8c4ac5283d1b353a255529cc5
          reader: *bluemoon
          steps:
            - type: CsvExtract
              description: "Extract usernames"
              config:
                path: 'out/usernames_%worker%.csv'
                ignoreDuplicates: true
                fields: [ 'from' ]
      output:
        - path: "out/usernames.csv"
          sha512: b05ce4d5bbf6a71cc4154e9373613911e8e849ff9442312aa8f79262bea45022deda9d084a3335e93ee6c490b8552e58a817a33999bc264d73ee726f18c2f8b5
          writer:
            type: FileConcatenate
            config:
              files: [ 'out/usernames_*.csv' ]
              ignoreDuplicateLines: true
              delete: true

A few other feature notes:
Multiple inputs and outputs did work, but this hasn't been tested in a while; they probably run in parallel, breaking worker messaging and/or determinism. Better worker messaging and a build flag for parallel inputs will come soon to fix both cases. Parallel input is mostly for readers that could potentially hit some web API or other long I/O; I don't know that it's useful in any other cases.

I will probably use gRPC for worker communication. This solves a couple of problems and also results in a proper protocol; Python workers could easily become a thing. Some capability negotiation for required transformers will be needed.
I've used it quite a bit in Rust and it was pleasant to work with. This could later lead to very efficient binaries for remote workers. I love Rust, but it was the wrong language for this tool; it is the right language for those kinds of servers. TBD for much larger datasets.

More Dart: I'm considering some GUI as well, but I haven't put much thought into it yet. Data visualization and descriptor editing would be pretty high on my feature list. Everything is already designed to be used as a library, and the actual CLI tool is barely more complicated than the example usage. I probably won't do web support, and there's probably no good use case for mobile.
Dart and Flutter are pretty good for UI these days; I don't think I'd be considering it in any other language. I feel like the state of UI development is kind of awful in general, and it's mostly unnecessary for a tool like this.

I'm also considering how to improve the overall design of the library while simplifying the pipeline and the descriptor.

The new pipeline would operate only on batches of conversations.
Readers would only emit once a conversation is completed. The rest of the pipeline operates only on batches of conversations.
There's no need to differentiate between postprocessors and writers in the pipeline except for synchronization purposes. Instead, any step could be guaranteed to run on the main thread (or a specific worker).

This removes any need for transformers that act on individual messages. The concatenation step is handled by the reader. Stream events are reduced drastically. No need for separate input and output steps in the descriptor. Writers behave like any other step and would no longer need to take place at the end of the pipeline. With multipass support we still retain the ability to run different steps for different outputs, or we just run an output step and keep going. Outputs are no longer explicitly defined but instead just another transformer step. Unsorted input also becomes more reasonable, since readers will already be expected to buffer messages until the end of a conversation. Maybe an artifacts section so we can still describe outputs.

This should reduce overall complexity of both the library and the descriptor, while also slightly improving performance.
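As a toy sketch of the idea (not the dsbuild API): every step, writers included, becomes a function from a batch of conversations to a batch of conversations.

```python
from typing import Callable, Iterable, Iterator

Conversation = list[dict]
Step = Callable[[list[Conversation]], list[Conversation]]

def run_pipeline(batches: Iterable[list[Conversation]],
                 steps: list[Step]) -> Iterator[list[Conversation]]:
    # Readers emit only completed conversations, so every stage,
    # including output steps, can share this one signature.
    for batch in batches:
        for step in steps:
            batch = step(batch)
        yield batch
```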

> More Dart: I'm considering some GUI as well, but I haven't put much thought into it yet.

Here I'm not sure I'd put the effort into an actual GUI; it might be more helpful if the tool could produce a set of PDF plots to some output directory, if specified. The CLI + yaml works very well. It's a different thing if you actually enjoy GUI programming; never met anyone who did, though :)

> The new pipeline would operate only on batches of conversations.
> Readers would only emit once a conversation is completed. The rest of the pipeline operates only on batches of conversations.
> There's no need to differentiate between postprocessors and writers in the pipeline except for synchronization purposes. Instead, any step could be guaranteed to run on the main thread (or a specific worker).
> This removes any need for transformers that act on individual messages. The concatenation step is handled by the reader. Stream events are reduced drastically. No need for separate input and output steps in the descriptor. Writers behave like any other step and would no longer need to take place at the end of the pipeline. With multipass support we still retain the ability to run different steps for different outputs, or we just run an output step and keep going. Outputs are no longer explicitly defined but instead just another transformer step. Unsorted input also becomes more reasonable, since readers will already be expected to buffer messages until the end of a conversation. Maybe an artifacts section so we can still describe outputs.

All of this has been implemented in the latest release (0.1.0-alpha.4).
This breaks the descriptor format, but the only required change for steps is to include sync: main on transformers that were previously referred to as readers or writers; this forces the step to run on the main thread. There is some new log output immediately after constructing the pipeline that displays how transformer steps are grouped by their sync target. The terminology here is a little funny, but I think it's better than run-on (which sounds like a trigger) or host (which sounds unrelated to the threading strategy).

The sync targets are main and local, with two additional unimplemented targets remote and auto. auto will just be intended to merge with the previous group and/or take a hint from the transformer. Everything is currently local by default to match the previous behavior.

There is a minor regression on progress reports: there's no progress for total read messages/conversations yet, because that will now need to be provided by the transformers.

It feels pretty flexible, and overall complexity is down. There are no more readers, writers, preprocessors, or postprocessors. Everything is a ConversationTransformer. It also makes the docs pleasant to read with all transformers in the same library and extending from the same base class.

Releases on github also include Linux ARM64 binaries now.

Descriptor updated with new format. No changes to the dataset. Next round of updates will include some transformer updates to better handle edge cases, and will probably come with a dataset update to strip usernames.

Another round of updates. I think this takes care of the username problem. This also comes with a tool update that allows caching binary data from the filesystem: when a transformation step requiring binary data is sent to a worker, the required cache data is sent along with it. So we can load replacements from an external csv and only have it read from the filesystem once. There are still some limitations where we might want to send some artifact from a worker back to a client; currently that still relies on a shared filesystem. Also, it's only binary data, so it still needs to be decoded for each batch. A more robust solution for synchronizing artifacts will come with support for remote workers.

  • Removed ~395 block quotes containing usernames. Matches dom blockquote.bbCodeBlock
  • Move edge cases and spelling replacements to external csvs
  • Fix some incorrectly escaped edge cases
    some 'the end' stuff

I've implemented the previously suggested strategy of concatenating consecutive messages. It also manages the participant-count problem by dropping the participant with the fewest messages. That might just fix all of the conversation flow issues. Total dropped messages is down to only 3193, which also includes the concatenated messages.

Restored 87,191 additional messages

My next focus is on remote workers, stats, stat-based pruning (like on message length), and graphs. I think it would be nice to be able to leverage pandas so that is probably the direction I will go there.

I had tried to clean this up a short while ago using Karen_TheEditor (13B) (https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ).
Here is the result (split into 2 parts):
https://files.catbox.moe/vxqjqg.json
https://files.catbox.moe/1o44yc.json

I tried to fix as many of the grammatical issues as possible and didn't drop any conversations. That said, there are still issues since Karen is not perfect. If I detected any large deviations from the original text, I fell back to a standard spell-checker excluding estimated proper nouns (which is also not perfect).
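A hedged sketch of that fallback heuristic using only the Python standard library (the 0.9 threshold and function name are invented for illustration):

```python
import difflib

def accept_model_edit(original: str, edited: str, threshold: float = 0.9) -> bool:
    # Keep the model's rewrite only if it stays close to the source text;
    # otherwise fall back to a plain spell-checker pass.
    ratio = difflib.SequenceMatcher(None, original, edited).ratio()
    return ratio >= threshold

print(accept_model_edit("Today if possiable", "Today if possible"))  # True
```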

I intended this as a first pass and mainly wanted to test an automated way to clean language data sets, but I never got around to doing a better job on bluemoon. Still, it is probably way better than the original version. It took me almost a week with 5 GPUs and 10 instances running in parallel, so I don't know when I can spare the compute to try again.

https://files.catbox.moe/vxqjqg.json
https://files.catbox.moe/1o44yc.json

Have these been pushed to the dataset, or is the dataset still the one before the processing?

> https://files.catbox.moe/vxqjqg.json
> https://files.catbox.moe/1o44yc.json
>
> Have these been pushed to the dataset, or is the dataset still the one before the processing?

Sorry, what do you mean? You can compare them to bluemoon.train.json.
I'm not aware of anyone publishing any further processing of this dataset, or of any additional data that might be available.
I'm also not aware of my work here being published anywhere else.

I thought these:
https://files.catbox.moe/vxqjqg.json
https://files.catbox.moe/1o44yc.json
were the post-processed versions of the bluemoon dataset. I was asking if they have been incorporated into bluemoon.train.json.

At first glance it looks like they came from here, or at least were produced by my tool. I'm not sure where you sourced them from, so I can't say without actually joining and hashing them.

As far as I'm aware this is the best version of this dataset. The only additional processing you might like is to split long conversations. If you don't split long conversations, consider using the pruned configuration instead of full. Be sure to check the readme and the dataset.yaml if you would like to review, reproduce, or modify the cleaning steps.

Please let me know if you find additional cleaning that might need to be done, or if there has been additional work on this dataset from others that you would like to see here.

I did use the base from this repo, but the ones I linked on catbox were re-processed to remove the huge number of grammatical and spelling errors in the database. Pretty much every single entry is completely different. I think ritabratamaiti was asking if the modified versions I linked are the same as the ones in this repo. I think not?

I don't think the original version should be replaced, just in case some error crept in somewhere in my processing, but you could create a new branch with the version I linked above. If you don't want to, I can create a separate repo and credit yours as the source. Either way, it is probably a good idea to make the corrected version more easily accessible, given the large amount of compute required to replicate it. I personally would rather train on my re-processed version, because there are too many errors from the source in this repo, or in any other version of bluemoon I've seen around the internet (because they are human writers).

Edit: Also, ritabratamaiti was referencing my post a few posts up, which explains what I did. That's pretty much the main source. I may have posted about it on reddit also, but it was probably the same post content. I figured if there was anywhere it had to go, it had to go here.

@Grimulkan yup, that was my question! It looks okay reading the first few entries; if you do make a new repo for the processed dataset, please ping this thread.

@ritabratamaiti Will do, but I'll let Squish42 decide what they want to do first.

Hi Grimulkan. Thanks for the clarification! I definitely felt like I was missing some context there. Your previous post slipped my mind completely.

So, now that I am feeling more aware. I see two ways forward, and we can do both.

We can integrate those changes here and make them available as a separate configuration alongside full and pruned (possibly in a new branch). This would also mean adding a new feature to dsbuild to support this kind of AI postprocessing, which is something I've been wanting to add anyway. It is specifically pending support for remote workers and gRPC. The primary requirement is that we need to be able to generate reproducible output. Ideally the new feature would allow a choice of model, seed, and other generation parameters, and we can collectively decide on the defaults for new dataset configurations. It may be a little while before all this happens, so I still propose continuing with the other option.

If Grimulkan would like to upload this in a new fork or downstream repository, I have no qualms with that at all. The tool can't reproduce that data in its current state, so I think it naturally calls for a downstream project. As fair warning, I obviously don't take any credit for the bluemoon source data, but the actual work I've done here in the yaml files I consider public domain, so any links or credit back to me are completely optional. The tool itself is MIT-licensed and doesn't attempt to put any restrictions on how the output is used.

As a side note, I'd love to explore more precise ways of handling these kinds of changes, specifically for grammar and spelling. Other types of AI postprocessing often feel like a black box, and in large datasets it can be very difficult to spot problems. I intend to make a proper go at ShareGPT at some point, and the tool itself is not far off, but I still don't have good solutions to some of these problems. There is already some support for tokenization, and in a lot of cases I think just checking for the same token in a known-good dataset can tell us whether there are likely to be spelling errors, so long as it has an equally colorful vocabulary. Grammar is more difficult, but if we can accurately classify and review the corrections then AI output can be more useful; the biggest risks are some latent rejection contaminating the dataset, or context being lost because the vocabulary just doesn't exist in the supporting model.

Better late than never: bluemoon_Karen_cleaned

It is the same as what I posted earlier, no change. Some day I'll get around to spending some compute cleaning/augmenting it with a better model.

Sign up or log in to comment