Search parameters are passed in the URL (typically in the query string), and results are returned in the response body:
--->
GET /api/entity/?<search parameters>
<---
200 OK
{
	"entity": [
		{ ... },
		{ ... },
		...
	]
}
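For illustration, here is a minimal client-side sketch of this style in TypeScript (the /api/entity endpoint and the shape of the response follow the example above; the parameter names in the usage line are made up):

// Sketch: GET-based search; parameters are serialized into the query string.
async function searchEntities(params: Record<string, string>): Promise<unknown[]> {
	const query = new URLSearchParams(params).toString();
	const response = await fetch(`/api/entity/?${query}`);
	if (!response.ok) {
		throw new Error(`Search failed: ${response.status}`);
	}
	const body = await response.json();
	return body.entity; // the response wraps results in an "entity" array
}

// Usage: await searchEntities({ model: "X5", color: "black" });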
Search parameters are passed in the request body. Essentially, in REST lingo, the POST request means "create me a [transient] snapshot/index of items satisfying the given conditions".
Results may be returned in two ways, depending on whether search results are transient (for quick "fire-and-forget" searches) or more persistent (for potentially long-running or data-heavy searches):
The server creates a search job, immediately performs it, returns the result set, and discards it:
--->
POST /api/search?type=entity
{
	<search parameters>
}
<---
200 OK
{
	"entity": [
		{ ... },
		{ ... },
		...
	]
}
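A client-side sketch of this synchronous variant in TypeScript (same assumptions as above: the /api/search?type=entity endpoint and the "entity" result array are taken from the example):

// Sketch: synchronous POST-based search; parameters travel in the request body.
async function searchEntities(searchParams: object): Promise<unknown[]> {
	const response = await fetch("/api/search?type=entity", {
		method: "POST",
		headers: { "Content-Type": "application/json" },
		body: JSON.stringify(searchParams),
	});
	if (!response.ok) {
		throw new Error(`Search failed: ${response.status}`);
	}
	const body = await response.json();
	return body.entity; // the result set is returned immediately and not persisted
}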
The server creates a search job, starts it, and returns the search job's unique address in the standard Location header. After that, the client may request the job's results and/or status using the returned URL, as well as cancel the search job and/or delete its results:
--->
POST /api/search?type=entity
{
	<search parameters>
}
<---
201 Created
Location: /api/search/{searchResultId}
--->
GET /api/search/{searchResultId}/status
<---
200 OK
{
	"status": "running",
	"progress": "69%"
}
--->
GET /api/search/{searchResultId}/status
<---
200 OK
{
	"status": "complete",
	"progress": "100%"
}
--->
GET /api/search/{searchResultId}
<---
200 OK
{
	"entity": [
		{ ... },
		{ ... },
		...
	]
}
--->
DELETE /api/search/{searchResultId}
<---
200 OK
Note: here we used /search in the URL as a noun representing an entity, "a search job".
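A possible client-side flow for this asynchronous variant (a TypeScript sketch; the /status sub-resource and the "complete" status value follow the example above, while the one-second polling interval is an arbitrary assumption):

// Sketch: asynchronous POST-based search with polling and cleanup.
async function searchEntitiesAsync(searchParams: object): Promise<unknown[]> {
	// 1. Create the search job; its address comes back in the Location header.
	const created = await fetch("/api/search?type=entity", {
		method: "POST",
		headers: { "Content-Type": "application/json" },
		body: JSON.stringify(searchParams),
	});
	const jobUrl = created.headers.get("Location");
	if (created.status !== 201 || !jobUrl) {
		throw new Error(`Could not create a search job: ${created.status}`);
	}

	// 2. Poll the job status until the search completes.
	while (true) {
		const status = await (await fetch(`${jobUrl}/status`)).json();
		if (status.status === "complete") break;
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	// 3. Retrieve the result set, then delete it on the server.
	const results = await (await fetch(jobUrl)).json();
	await fetch(jobUrl, { method: "DELETE" });
	return results.entity;
}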
The server may store results in a session or in persistent storage for a limited time, long enough for the client to retrieve them. The server may also implement caching, returning the same searchResultId for identical query parameters (if the data has not changed), so that repeated requests do not trigger multiple searches.
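One way a server could implement that second idea, returning the same searchResultId for identical queries, is to key stored result sets by a hash of the canonicalized parameters. A sketch, assuming Node's crypto module and an in-memory map (a real service would also track data versions and expire entries):

import { createHash } from "node:crypto";

// Sketch: deduplicate search jobs by hashing canonicalized search parameters.
const resultIdsByQueryHash = new Map<string, string>();

function getOrCreateSearchResultId(searchParams: Record<string, unknown>): string {
	// Canonicalize: sort top-level keys so equivalent queries hash identically.
	const canonical = JSON.stringify(
		Object.fromEntries(Object.entries(searchParams).sort(([a], [b]) => a.localeCompare(b)))
	);
	const queryHash = createHash("sha256").update(canonical).digest("hex");

	let searchResultId = resultIdsByQueryHash.get(queryHash);
	if (!searchResultId) {
		searchResultId = queryHash.slice(0, 16); // reuse part of the hash as the id
		resultIdsByQueryHash.set(queryHash, searchResultId);
		// ...start the actual search job for this id here...
	}
	return searchResultId; // the handler then returns Location: /api/search/{searchResultId}
}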
Contrary to popular opinion, using POST for searches is a perfectly RESTful approach; on the other hand, using GET may violate REST, for the sake of some benefits.
REST recommends using POST to create a resource and GET to retrieve one. Now it all depends on what one considers a "resource". In REST, a resource is anything that has a unique address. For example, /api/{entity}/{id} is a resource, and that makes perfect sense to most developers.
But is /car/search?q=<query string> or /car/?q=<query string> a resource? That is arguable, because nobody ever created it, i.e. nobody submitted POST /car/search?q=<query string> or POST /car/?q=<query string>. There is no such entity as car/search. One may think of it as a "virtual resource" created on the fly and discarded, but GET should not create new resources; that is a job for POST. And it looks like the word search is used here as a verb, turning the request into an RPC-like one.
Of course, creating such a "virtual resource" does not change the application state, it just returns a representation, a slice of the state, so technically it is still RESTful, but only if all searches are synchronous and result sets are transient and discarded immediately. As soon as you introduce asynchrony, the application state has to include a collection of running search jobs and result sets waiting for retrieval, so using GET becomes less valid because it causes observable side effects.
Benefits of using GET for searches:
- Hypertext. The search URL can be bookmarked, copied/pasted from/to the browser address bar, and saved in another document as a hyperlink.
- History. Search URLs work with the browser history.
- Caching. Search results may be cached, reducing server load.
Downsides of using GET for searches:
- Caching. GET requests can be aggressively cached by middleware (which may totally ignore your Cache-Control headers), so there is a risk of users receiving stale results. There is a workaround in the form of a fake random query parameter (a hash or timestamp) that makes every query URL unique and busts the cache, but it is clearly a hack and should not appear in a good design.
- Max URL length. GET search requests can pass search parameters only in the URL (a request body in GET has no defined meaning and cannot be relied upon). Browsers and middleware limit URL length, and complex queries may hit the limit, especially if some query parameters contain machine-generated data (e.g. another URL).
- Security. URLs routinely end up in clear text in server logs, proxy logs, browser history, and Referer headers (the query string is encrypted by TLS on the wire, but not once it is logged), so there is a risk of leaking sensitive data. Even apparently benign data may cause an unintended leak if an adversary manages to correlate it and deduce the rest. Every change in the query format may require a complex security analysis to ensure no new sensitive-data leaks are introduced.
- Poor support for heavy search jobs. Some search queries may take minutes/hours/days to complete, or may be so data-heavy that retrieving the result requires multiple requests. GET may time out on such requests, and getting results may require implementing a complex polling scheme. With GET you have no notion of a "search job" resource that you can query for status, progress, etc., so you may need to jump through hoops to simulate it (e.g. passing a job id/status/progress via custom headers). With POST you get this more naturally, because POST may create a new resource, "a search job", with its own unique address, and you can make all kinds of inquiries about it.
Use GET for searches when:
- there is absolutely no danger of leaking sensitive data via the URL, and you can prove it;
- your client can tolerate receiving stale results, and the benefits of result caching outweigh this. For example, it is totally fine to get a slightly outdated Google search result, hence it is OK to use GET. But sometimes users must always get the most recent data;
- your clients require up-to-date results, but you fully control all the middleware between the client and the service (e.g. your web service has intranet clients only);
- the benefits of bookmarking/history outweigh the impact of receiving stale results;
- there is no danger of hitting a URL length limit, e.g. when query data is typed by users (they usually do not type a lot). If your query string may contain machine-generated data with no length constraints, consider using POST instead. For example, a search query may need to pass a third-party URL as a parameter, but that URL may legitimately be close to the maximum length already, leaving you no margin (remember you must also urlencode it!); thus you may get an error or, worse, a silent URL truncation by a rogue middleware and a hard-to-reproduce bug, as the sketch after this list shows.
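To see how quickly an embedded URL eats into the limit, compare its raw and urlencoded lengths (a small TypeScript sketch; the 2048-character limit is a common conservative bound used by some browsers and proxies, not a universal standard):

// Sketch: a machine-generated parameter (here, another URL) grows when urlencoded.
const thirdPartyUrl =
	"https://example.com/path?redirect=https%3A%2F%2Fexample.org%2F%3Fq%3Dcars%26page%3D2";
const encoded = encodeURIComponent(thirdPartyUrl); // '%' becomes '%25', ':' becomes '%3A', etc.
console.log(thirdPartyUrl.length, encoded.length); // the encoded form is noticeably longer

const searchUrl = `/car/?source=${encoded}`;
const URL_LIMIT = 2048;
if (searchUrl.length > URL_LIMIT) {
	// with GET there is nothing left to do but fail; with POST the body has no such limit
}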
Use POST for searches when:
- there is an [unknown] risk of leaking sensitive data via the URL;
- your client is sensitive to getting the most recent data;
- your query may be so complex that expressing it in a query string requires a contrived syntax;
- your query parameters may contain machine-generated data (e.g. a third-party URL) of potentially unrestricted length, which may push you past your own URL length limit;
- your query may be CPU- or data-heavy, and you need to introduce some asynchrony to retrieve the result set.
Many search protocols/engines/middleware allow using either GET or POST, or both. Implement both and let your clients choose which one to use.
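If you do support both, one way to avoid duplicating logic is to route both verbs to the same search function. A sketch using Express (runSearch and the route paths are illustrative assumptions, not part of any real API):

import express from "express";

const app = express();
app.use(express.json());

// Hypothetical search implementation shared by both entry points.
async function runSearch(params: Record<string, unknown>) {
	return { entity: [] /* ...items matching params... */ };
}

// GET: parameters come from the query string (bookmarkable, cacheable).
app.get("/api/entity/", async (req, res) => {
	res.json(await runSearch(req.query));
});

// POST: parameters come from the request body (no URL length limit, not logged in URLs).
app.post("/api/search", async (req, res) => {
	res.json(await runSearch(req.body));
});

app.listen(8080);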