Yincognito wrote: ↑March 29th, 2020, 6:38 pm
Haha, don't worry, the league is open to promotion and relegation, so to speak - everybody has a place, and can rise or fall as time goes by.
That being said, even more helpful would be to experiment and work with the code, instead of knowing the manual by heart and failing to write an effective piece of code. But since you mentioned producing results from the Rainmeter docs only, doesn't setting the URL to
https://www.google.com/search?sitesearch=docs.rainmeter.net&q=#ToSearch# provide you with that? Or you're using one of the various versions of the skins posted here and you don't know how to modify it (and if so, which one)? Please clarify.
Feeling somewhat ashamed of my lack of skills, and overwhelmed by you guys' talent, I had become a silent observer of this thread.
However, seeing the interaction between Yincognito and cordemanon gave me a new burst of hope. I have been using what I've been calling DocuGetv3 by eclectic-tech (v1 by balala and v2 by Yincognito).
For some reason the skin is no longer searching properly, and instead appears blank. I think this might be an issue with the Google search URL or the user agent; when I tried to look into it, I was VERY overwhelmed with info that I simply did not understand.
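If the blank results come from Google serving an empty page to an unrecognized client, one low-effort thing to try is sending a browser-like user agent via the WebParser Header option. A minimal sketch only (the measure name, update rate, and User-Agent string are my own guesses; the URL is the one from Yincognito's quote, and you'd keep whatever RegExp your version of the skin already uses):

```ini
[MeasureSearch]
Measure=WebParser
URL=https://www.google.com/search?sitesearch=docs.rainmeter.net&q=#ToSearch#
; Pretend to be a regular browser; some sites return blank pages to unknown agents
Header=User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
UpdateRate=600
```

If that doesn't help, the next suspect would be Google having changed its result markup, which would break the skin's RegExp rather than the request itself.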
now about me wanting it to load the whole page into a scrollable rainmeter window...
Balala wrote:Why? I doubt this would be a good idea. What can you do with the whole content of a page?
Yincognito wrote:Ok now, first let me say that parsing entire HTML pages, while not (technically) impossible using regex, is going to be very tough - and I have some experience in that, LOL. The bare text might be "easier" to get (by, say, eliminating tags </tag>.*</tag> from the page source, along with expertly extracting the right strings from within quotes), but if you want to convert HTML headings and such into Rainmeter inline format ... well, let's say it's going to take a LOT of effort (to put it mildly).
My advice is to just open the desired page in a browser and be done with it. There's going to be way too much effort (and, to be honest with you, way too less experience in regex for you) to even attempt to do otherwise.
EDIT: You could however try a limited approach to this, taking advantage of Google's work, balala's sample and my "improved" regex above to get a short "summary"/ "extract" of the linked page in tooltips (you can, of course, make it more elegant than this, although I doubt you'd need scrollbars for it):
I know it seems silly: why the heck would I want that? Well, it's simple. I am a huge sucker for my sexy UI and I will not compromise. The clunky nature of modern browsers is simply too disruptive to my workflow.
That being said, what I want IS possible. Look at this Lua script (which I totally snagged):
Code: Select all
local t = [[ some long html stuff in here ]]
local cleaner = {
    { "&amp;", "&" },    -- decode ampersands
    { "&mdash;", "-" },  -- em dashes
    { "&rsquo;", "'" },  -- curly apostrophes
    { "&nbsp;", " " },   -- non-breaking spaces
    { "<br.-/>", "\n" }, -- line breaks become newlines (".-" is non-greedy)
    { "</p>", "\n" },    -- paragraph ends become newlines
    { "(%b<>)", "\n" },  -- any remaining balanced <...> tag becomes a newline
    { "\n\n*", "\n" },   -- collapse runs of blank lines
    { "\n*$", "" },      -- trim trailing newlines
    { "^\n*", "" },      -- trim leading newlines
}
for i = 1, #cleaner do
    local cleans = cleaner[i]
    t = string.gsub(t, cleans[1], cleans[2])
end
print(t)
That'll get you a really nice, clean documentation page, especially if it's from a docs website. Try it on the Rainmeter docs. The only thing I can't seem to get rid of is the annoying meta tags, which make up the top half of the page.
I have come to understand regex is really more of a word-by-word, nay, character-by-character parsing method, but there's gotta be a way to get rid of those meta tags. And then, once you get that .txt file,
parse it with the ever-famed jsmorley's
Lua scroll text.
Putting this together should theoretically be simple, but without a working example for parsing the actual web data, I am left helpless to go any further.
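On the meta-tag problem: since Lua patterns support the non-greedy `.-`, you can cut whole `<head>`, `<script>`, and `<style>` blocks out of the page source before the tag-stripping cleaner runs. A best-effort sketch only (the `stripBlocks` name and tag list are my own, and Lua patterns are not a real HTML parser):

```lua
-- Hypothetical helper: remove whole non-content blocks before cleaning tags.
local function stripBlocks(html)
    for _, tag in ipairs({ "head", "script", "style" }) do
        -- ".-" matches as little as possible, so each block is removed
        -- wholesale; this assumes lowercase tags and no nesting of the same tag
        html = html:gsub("<" .. tag .. ".->.-</" .. tag .. ">", "")
    end
    return html
end

local page = "<head><meta charset='utf-8'><title>x</title></head><body><p>Hi</p></body>"
print(stripBlocks(page))  -- <body><p>Hi</p></body>
```

Running the existing cleaner on the output of something like this should leave only the visible body text, with the meta/script noise already gone.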
thanks for all the interest guys, you have truly warmed my heart to this community and made me feel valued for my idea. but most of all for helping one of my dreams come true, even if it was for a brief moment.
Here is a snapshot to illustrate both points in my post. The search no longer displays results (sad face).
The Lua scroll text combined with the Lua HTML cleaner is definitely a thing, albeit without styling (happy face).