It's powered by the GPT-4 model family and is planned to be integrated into ChatGPT at some point in the future. SearchGPT is OpenAI’s long-awaited answer to...
No, it’s fancy autocomplete at a huge scale. Sometimes it returns correct answers.
A search engine should take a list of websites plus metadata about those websites and return results based on some ranking, the original goal being to get you what you wanted. (The current goal is just how much money can be extracted from your hands on the keys.)
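To make that concrete, here's a minimal, purely illustrative sketch of that idea: score each page by how well its metadata matches the query terms, then return URLs sorted by score. The page structure, field names, and weights are all made up for the example, not how any real engine works.

```python
# Toy search-engine ranking: match query terms against page metadata
# and sort by a simple score. Everything here is illustrative.

def rank(pages, query):
    """pages: list of dicts with 'url', 'title', 'keywords'."""
    terms = query.lower().split()
    scored = []
    for page in pages:
        score = 0
        for term in terms:
            if term in page["title"].lower():
                score += 2  # a title match counts more than a keyword match
            if term in page["keywords"]:
                score += 1
        if score > 0:
            scored.append((score, page["url"]))
    scored.sort(reverse=True)  # highest score first
    return [url for _, url in scored]

pages = [
    {"url": "a.example", "title": "Python tutorial", "keywords": ["python", "beginner"]},
    {"url": "b.example", "title": "Cooking basics", "keywords": ["food"]},
]
print(rank(pages, "python tutorial"))  # only a.example matches
```

Real engines replace that scoring step with things like TF-IDF/BM25 and link analysis, but the shape is the same: index, match, rank, return.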
No. ChatGPT pulls information out of its ass, and as I read it, SearchGPT actually links to sources (while also summarizing them and pulling information out of its ass, presumably). ChatGPT “knows” things, whereas SearchGPT should actually look stuff up and present it to you.
Kagi has supported this for a while. You can end your query with a question mark to request a “quick answer” generated using an LLM, complete with sources and citations. It’s surprisingly accurate and useful!
From the training dataset, which was frozen years ago. It’s like you knowing something instead of looking it up. It doesn’t provide sources; it just makes shit up based on what was in the (old) dataset. That’s totally different from looking up the information based on what you know and then using the new information to create an informed answer backed up by sources.
deleted by creator
It is, but it’s not updated in real time, unlike SearchGPT.