JavLuv JAV Browser

DScott

Well-Known Member
Jan 27, 2024
Are you getting a message after scanning that says "This file exists somewhere else in your collection"?

I'm also sure there's a setting you can toggle to generate covers or .nfo files for movies it finds no info on. TmpGuy will help you better, of course.

Here it is under settings.
[Attachment: screenshot of the setting]


If you are a new user, here's a tip I use when batch-adding a lot of movies and I'm unsure what's new and what's not in JavLuv, especially as the NEW toggle doesn't always work well when release dates are missing and so on. I have a folder I dump ALL new downloads into; call it, say, NEW-JAV-DOWNLOADS.

Now dump all your new JAV in there and scan it with JavLuv. Then paste the folder name NEW-JAV-DOWNLOADS into JavLuv's search box, and only movies within that folder will be displayed, which makes things a thousand times easier if you are a serial hoarder like me. NOTE: You can do this with any folder. Name it after a series or a content type like [Massage] or [NAMPA], and only folders and filenames containing that word will be shown.

If you are using a sorting folder as I described, though, give it a unique one-word name like NEWDOWNLOADS2024, not something like NAMPA, lol. If you call it NEW-NAMPA-DOWNLOADS, then everything matching NAMPA, everything matching DOWNLOADS, and everything matching NEW will be flagged; keep it to one unique word and only that folder will be displayed, if that makes sense. Just smoked a large one, lol, so...
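For illustration, here is a minimal sketch of the word-splitting behavior described above, assuming the search OR-matches each word of the query against names (this is only a guess at the effect DScott reports; JavLuv's actual matching logic may differ):

Code:
def matches(query, name):
    # Assumed behavior: split the query on hyphens/spaces and flag a
    # title if it contains ANY of the words. This is a guess, not
    # JavLuv's real implementation.
    words = query.replace("-", " ").lower().split()
    return any(word in name.lower() for word in words)

# A hyphenated sorting-folder name flags unrelated titles...
assert matches("NEW-NAMPA-DOWNLOADS", "[NAMPA] Beach Pickup 03")
# ...while a unique one-word name only matches the folder itself.
assert not matches("NEWDOWNLOADS2024", "[NAMPA] Beach Pickup 03")
assert matches("NEWDOWNLOADS2024", "NEWDOWNLOADS2024/ABC-123.mp4")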


EDIT: @TmpGuy I've now tried several others that have failed miserably, and the info is there, as I've manually checked. It seems hit and miss at the moment.
I've run into a few glitches here and there, where I've found thumb and .nfo files named something like d.mp4 that are connected to no file. Upon further digging, I discovered they are actually associated with, say, abc-123 loud.mp4, so I just correct the filenames and all is good. I've only seen this with a couple of files/folders at most. I've also seen the 'error' at the end of the scan saying that I have dupes; no big surprise there for me.

In one of TmpGuy's posts he mentioned that JavLibrary was preventing the program from accessing its database unless you verify yourself as human. I scanned about 30K files when I first got the program a few days ago and got very good results, not perfect, but certainly very, very good. So after reading the info about JavLibrary, I deleted all thumbs and .nfo files, then deleted the program data from the AppData/Local directory and started again. The only difference is that before doing the scan I logged in to JavLibrary first, then ran the scan. I can't be certain, but it seems the new scan yielded better results. Although the headshot section still has a lot of holes, there are around 16,000 files in the Actress directory, so all in all the results are very good. I'll keep my eye on updates.

I have a question for you though, TmpGuy. Part of my archive structure involves a folder called favactresses. Within that folder I have individual folders for various actresses. After using your program I tentatively decided that I don't really need that, so I could move those files into their higher-level directories (Moodyz, Deeps, etc.), since I can search for a particular actress if I want to. Buuuuuuut, and this is why I'm here asking: let's take my Abe Mikako folder. Within that folder there are perhaps 100 videos. Your program has cataloged most, but not all, of them. So, if I were to move all of the files to a variety of upper-level folders, then a search in JavLuv would not display those uncataloged files. Is that right? In other words, it doesn't act as a file manager except for the files it was able to find .nfo/covers for, so the re-integration into higher-level folders could have adverse results when trying to find the complete repertoire of a given actress, and anything in my directories that was not cataloged would become invisible to JavLuv. Am I reading that correctly?
 

DScott

Well-Known Member
Jan 27, 2024
Can someone please tell me if there's a configuration setting to solve this problem? If I scan, for example, my ASW folder, which contains about 300 files, I get a message before the scan that says 'scanning 53 files', and at the end of the scan I get an errata page listing a bunch of titles for which the metadata is supposedly not available. However, all of the rejected titles are available on JavLibrary. I'm really hoping someone can tell me how to resolve this problem. Thanks in advance.
 

Attachments

  • Screenshot 2024-05-24 123106.png
  • Screenshot 2024-05-24 123252.png

Fetterbr

Member
Dec 10, 2010
Would it not be possible to see the error instead of just a box like this?
 

Attachments

  • Capture500.JPG

SamKook

Grand Wizard
Staff member
Super Moderator
Uploader
May 10, 2009
Would it not be possible to see the error instead of just a box like this?
From only the information in that error message, I can deduce that it's trying to rename or move a file, but the destination name/path is empty: it says 'path2' rather than 'path' or 'path1', so it's the destination, and 'null' means there is no value for it.

I doubt seeing a raw error message would make it easier to understand; quite probably the opposite.
 

Fetterbr

Member
Dec 10, 2010
From only the information in that error message, I can deduce that it's trying to rename or move a file, but the destination name/path is empty: it says 'path2' rather than 'path' or 'path1', so it's the destination, and 'null' means there is no value for it.

I doubt seeing a raw error message would make it easier to understand; quite probably the opposite.
So that means it hasn't created the movie folder yet? Not sure how to handle this: the same system that is supposed to create the folder gives me an error that it can't find it? I assume it has something to do with missing information from the net, a scraper that can't scrape, or something. It could also be that stupid bug with Windows Explorer.
 

SamKook

Grand Wizard
Staff member
Super Moderator
Uploader
May 10, 2009
To figure this out, you'd want to check whether there's a problem creating files or folders at the intended destination, and maybe try a different destination to see if you get the same issue, assuming you can; I have no clue what you're doing or how it works, and since JavLuv has a lot of automation, I don't know if it's something you can even control.
If you can't, you'll have to wait for TmpGuy. But if you can, try to see what could prevent the folder from being created: either you'll figure out how to fix it, or it'll help him figure it out and possibly handle that problem better, if that's possible.
 

TmpGuy

JavLuv author, lesbian connoisseur
Jun 1, 2013
Would it not be possible to see the error instead of just a box like this?

That's normally what's supposed to happen. Unfortunately, there's a bug in JavLuv when movie metadata is not found (or only partially found): it then tries to rename the movie based on missing information, resulting in that error. I'll try to tackle that one as well when I get a chance.

If this is causing you problems, my recommendation is to uncheck the "Move / Rename after scan" option in the scan dialog box. Once movies are successfully imported, you can then move/rename them as a separate step. Apologies for the inconvenience in the meantime.
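For anyone scripting their own moves in the meantime, the failure mode described above (a rename attempted with an empty destination) is easy to guard against. A minimal sketch, purely illustrative and in Python rather than whatever JavLuv uses internally:

Code:
import os
import shutil

def safe_move(src, dst):
    # Skip the move when no destination was ever computed (e.g. because
    # the metadata lookup failed), instead of crashing mid-rename.
    if not dst:
        print(f"no destination for {src!r}; leaving file in place")
        return
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src, dst)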
 

TmpGuy

JavLuv author, lesbian connoisseur
Jun 1, 2013
Can someone please tell me if there's a configuration setting to solve this problem? [...] All of the rejected titles are available on JavLibrary.

As I mentioned earlier, JavLibrary recently introduced some anti-scraping technology, so JavLuv's scraper is currently broken. I'm working on developing an alternate scraping solution that can deal with this.
 

Fetterbr

Member
Dec 10, 2010
As I mentioned earlier, JavLibrary recently introduced some anti-scraping technology, so JavLuv's scraper is currently broken. I'm working on developing an alternate scraping solution that can deal with this.
Please also make it possible for us to add sites to scrape ourselves.
 

DScott

Well-Known Member
Jan 27, 2024
As I mentioned earlier, JavLibrary recently introduced some anti-scraping technology, so JavLuv's scraper is currently broken. I'm working on developing an alternate scraping solution that can deal with this.
OK, thanks TmpGuy. I knew you had mentioned the issue with JavLibrary, but since it is picking up some titles, just not all of them, I figured maybe I had a configuration error. I'll wait for updates, and even if nothing changes, I'm still very happy with what it does now. Cheers.
 

TmpGuy

JavLuv author, lesbian connoisseur
Jun 1, 2013
Please also make it possible for us to add sites to scrape ourselves.

Ah, I wish scraping were that easy. Each scraper has to be tailored to a given site, parsing its HTML and extracting information that is presented in a very site-specific way. Every one of them works differently, so each site tends to need a few pages of code specific to that site to extract data from the page and deal with whatever quirks it may have.

Someday maybe I'll be able to train an AI model to figure it out like a human does, more intuitively. But for now, I'm hand-authoring HTML parsers.
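To give a feel for what hand-authoring an HTML parser means in practice, here is a minimal sketch using requests and lxml. The XPaths below are invented for illustration, not taken from any real site or from JavLuv; every real site needs its own, which is exactly the point:

Code:
import requests
from lxml import html

def scrape_movie_page(url):
    # Hypothetical site-specific parser. Each of these XPaths would have
    # to be discovered by inspecting the target site's markup.
    page = html.fromstring(requests.get(url, timeout=30).text)
    title = page.xpath("//div[@id='video_title']/h3/a/text()")
    date = page.xpath("//div[@id='video_date']//td[@class='text']/text()")
    return {
        "title": title[0].strip() if title else None,
        "date": date[0].strip() if date else None,
    }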
 

SamKook

Grand Wizard
Staff member
Super Moderator
Uploader
May 10, 2009
The way I've done it in my script, to make adding a new scraper as little work as possible for me, is to have one function that handles every possible case of how specific data can be returned, so for each new site I basically only have to provide the locations of its info.

For example, my function to scrape the Dandy website looks like this:
Code:
def Dandy(Choice):
    # Each attribute holds an XPath locating the data on the page,
    # plus a few site-specific flags.
    TemplateInfo = StudioInfo("//div[@id='detailMain']/img[2]")
    TemplateInfo.agecheck = ("//input[@value='Yes']", 1)
    TemplateInfo.cover = ("//div[@id='detailMain']/img[1]", 1, 0, "Front")
    TemplateInfo.altcover = ("//div[@id='detailMain']/img[2]", 1, 0, "Back")
    TemplateInfo.title = ("//div[@id='titleBox']/p", 1)
    TemplateInfo.date = ("//dl[@id='itemDatas']/dd[3]", 1)
    TemplateInfo.director = ("//dl[@id='itemDatas']/dd[2]", 1)
    TemplateInfo.runtime = ("//dl[@id='itemDatas']/dd[4]", 1)
    TemplateInfo.genres = ("//dl[@id='keywords']//a", 1)
    TemplateInfo.actresses = ("//a[@class='actress']", 1, 0)
    # The main scraper function does the actual fetching and extraction...
    TemplateFull(TemplateInfo, Choice)
    # ...plus site-specific extras, here grabbing the cover from elsewhere.
    DMM_mono_cover(Choice, f"1{CodeWeb.replace('-', '')}", "//a[@id='sample-image1']")

It's just one class with an XPath giving the location of each piece of info on the webpage, plus a few extra values that may be needed. It then calls the main scraper function, and extra routines if needed, like getting the cover from somewhere else in this case.

I'm not saying that's the way you should do it, since it's a lot of initial work, and even if you did it and let users fill out the class values themselves to create a custom scraper, there are often little things that need tweaking to handle weird cases, which means adding a small change to the main function, and users can't do that.
You also end up with over 100 lines of code just to handle actress names, so it definitely increases the complexity.

Just food for thought.
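The post doesn't show StudioInfo or TemplateFull themselves, so here is a hypothetical minimal reconstruction of the pattern (one shared extractor driven by per-site XPaths); the names and field layout are guesses, not SamKook's actual code:

Code:
from dataclasses import dataclass
from lxml import html

@dataclass
class StudioInfo:
    # Invented reconstruction: one attribute per piece of metadata,
    # each holding (xpath, *site-specific flags).
    checkpage: str = ""
    title: tuple = ()
    date: tuple = ()
    actresses: tuple = ()

def template_full(info, page_source):
    # The catch-all function: every site is scraped the same way once
    # its XPaths are filled in.
    page = html.fromstring(page_source)
    data = {}
    for name in ("title", "date", "actresses"):
        spec = getattr(info, name)
        if spec:
            data[name] = [t.strip() for t in page.xpath(spec[0] + "//text()")]
    return data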
 

Fetterbr

Member
Dec 10, 2010
The way I've done it in my script, to make adding a new scraper as little work as possible for me, is to have one function that handles every possible case of how specific data can be returned [...] Just food for thought.
There is always a need to add manual info, especially for the more kinky stuff that isn't available on the normal scraping sites. I would wish that, for instance, the two sites below could be included, to cover the kinkier side a little more as well.

 

SamKook

Grand Wizard
Staff member
Super Moderator
Uploader
May 10, 2009
The problem is that even simply having to provide the XPath to a very complex catch-all function like mine is going to be too hard for the average user, and it's a ton more initial work than just making a custom scraper for each site. A programmer can also just add extra scrapers, or submit them to him, since it is open source.

You're then left with very few users in the middle: people with enough technical knowledge to provide the location of each piece of information on a website, but not enough programming skill to make their own scraper and add it. The time spent building that solution would quite possibly be wasted.

With the usual programming methods we have today, you can't really build something that finds the info on any arbitrary website the way a human can, so the only other option left is this:
Someday maybe I'll be able to train an AI model to figure it out like a human does, more intuitively. But for now, I'm hand-authoring HTML parsers.
 

TmpGuy

JavLuv author, lesbian connoisseur
Jun 1, 2013
The way I've done it in my script, to make adding a new scraper as little work as possible for me, is to have one function that handles every possible case of how specific data can be returned [...] Just food for thought.

Yeah, there's other case-specific weirdness too. For example, JavLibrary.com's search-by-ID function returns ambiguous results for ID numbers below 100, which have to be disambiguated with a second parse, and the site prepends the ID to the title, which I strip off. For other sites, there's English vs. Japanese name order, different date formats, combined or separate body measurements, censored words in the titles (which JavLuv tries to figure out using a table), etc., etc.
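As one small example of the name-order quirk, a hypothetical sketch (not JavLuv's actual code) of converting a scraped family-given name to Western order:

Code:
def to_western_order(name):
    # Hypothetical: swap "Family Given" (e.g. "Asagiri Akari") into
    # "Given Family" order. A real scraper also has to decide per-site
    # whether names need swapping at all.
    parts = name.split()
    if len(parts) == 2:
        family, given = parts
        return f"{given} {family}"
    return name

assert to_western_order("Asagiri Akari") == "Akari Asagiri"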

Hopefully someday that AI-based scraper will be a realistic option.
 

TmpGuy

JavLuv author, lesbian connoisseur
Jun 1, 2013
Recently, and often, I have to scan a folder of MKV-muxed titles more than once for the video dimensions to show; otherwise they are 0. Is this a known thing I'm forgetting about?

Are these newly scanned movies, or older scans? If they're newly scanned, it may be a bug. Otherwise, this is probably expected behavior.

Video dimensions were a feature added later, after some people may have already scanned many movies and generated metadata. So I put in a feature that very slowly updates existing metadata in the background over time. However, newly scanned movies should get their dimensions immediately, so I think that's what you may be seeing. In theory, if you leave things alone, eventually everything should be updated with proper dimensions.
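For anyone who wants to spot-check a file themselves, one common way to read a video's dimensions is ffprobe; the post doesn't say what JavLuv uses internally, so this is just a convenient external check:

Code:
import json
import subprocess

def video_dimensions(path):
    # Requires ffprobe (part of FFmpeg) on the PATH.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    return stream["width"], stream["height"]

print(video_dimensions("ABC-123.mkv"))  # e.g. (1920, 1080)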
 

Moxy

JAV Archiever
Dec 22, 2009
Another small update on the JavLibrary issue: be careful with your actress data too; I'd back it up just in case. I've had several movies create new actresses and been baffled as to why, but it's simply the scraped data using JavLibrary's reversed name order, e.g. Akari Asagiri becomes Asagiri Akari.

If it happens and you think you've lost all your precious data on an actress, just enter the name either way into JavLuv and merge the blank entry into the main one. I wouldn't delete the wrong one; it doesn't seem to like that :)
 

tre11

New Member
Oct 31, 2023
I luv JavLuv!
BTW, how can I show the JAV covers only when the HDD is plugged in, and hide them when the HDD is unplugged from the PC, without scanning again? I have many HDDs, and I get dizzy choosing a movie while an HDD is unplugged: I can't play it, but the cover still shows up.

Thank you.
 

soles

New Member
May 7, 2024
Another small update on the JavLibrary issue: be careful with your actress data too; I'd back it up just in case. [...]
With all due respect, this sounds like a storage-management issue, not a JavLuv issue.