Tomcat Source Code Analysis: The Web Request Processing Flow
Preface:
Catalina is Tomcat's Servlet container implementation; it is responsible for processing client requests and producing responses.
A Servlet container alone, however, cannot serve clients: a connector must first accept requests from the client, parse them according to the agreed protocol, and then hand them to the Servlet container for processing.
1. Coyote
Coyote is the name of Tomcat's connector framework. Clients establish connections, send requests, and receive responses through Coyote.
Coyote encapsulates the underlying network communication and offers the Catalina container a unified interface, decoupling Catalina from the concrete request protocol and I/O model.
1) Transport protocols supported by Coyote
* HTTP/1.1 — mainly used when Tomcat runs standalone
* AJP — used for integration with a web server (e.g. Apache HTTP Server), enabling optimized static-resource serving and cluster deployments
* HTTP/2 — the next generation of the HTTP protocol, supported since Tomcat 8.5 and 9.0
2) Coyote also provides different I/O models (see the configuration sketch after this list):
* NIO
* NIO2
* APR (Apache Portable Runtime)
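Both the protocol and the I/O model of a Connector are selected through its protocol attribute. As a quick illustration, here is a minimal embedded-Tomcat sketch (assuming the org.apache.tomcat.embed:tomcat-embed-core dependency; the port and the chosen protocol class are arbitrary):

import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedConnectorDemo {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();

        // Name a ProtocolHandler class explicitly to pick the I/O model;
        // the shorthand "HTTP/1.1" resolves to Http11NioProtocol on Tomcat 8.5/9.
        Connector connector = new Connector("org.apache.coyote.http11.Http11Nio2Protocol");
        connector.setPort(8080);
        tomcat.getService().addConnector(connector);

        tomcat.start();
        tomcat.getServer().await();
    }
}

In server.xml the same choice is made via the protocol attribute of a Connector element.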
2. Main implementation classes of the Coyote framework
Let me first borrow a diagram from another blog to illustrate the request handling flow and the components involved (from https://blog.csdn.net/xlgen157387/article/details/79006434 )
The implementation steps are stated directly here; the sections below verify this flow through source code analysis.
1) Endpoint — the concrete socket receiving and processing class; an abstraction of the transport layer.
2) Processor — parses the bytes read by the Endpoint into a coyote Request according to the application protocol (e.g. HTTP/1.1); an abstraction of the application layer.
3) Adapter — adapts the request to the Servlet container for the actual processing.
The diagram gives an intuitive view of the steps a request goes through on its way to the actual Container; let's now analyze that picture from the source code.
3. The structure of Connector
1) The structure of Connector
The key parts of the Connector source are as follows:
public class Connector extends LifecycleMBeanBase {

    protected Service service = null;
    // the default ProtocolHandler implementation
    protected String protocolHandlerClassName = "org.apache.coyote.http11.Http11NioProtocol";
    protected final ProtocolHandler protocolHandler;
    protected Adapter adapter = null;

    @Override
    protected void startInternal() throws LifecycleException {
        // Validate settings before starting
        if (getPort() < 0) {
            throw new LifecycleException(sm.getString(
                    "coyoteConnector.invalidPort", Integer.valueOf(getPort())));
        }
        setState(LifecycleState.STARTING);
        try {
            // start the ProtocolHandler
            protocolHandler.start();
        } catch (Exception e) {
            String errPrefix = "";
            if (this.service != null) {
                errPrefix += "service.getName(): \"" + this.service.getName() + "\"; ";
            }
            throw new LifecycleException
                    (errPrefix + " " + sm.getString
                            ("coyoteConnector.protocolHandlerStartFailed"), e);
        }
    }
}
As shown above, the default ProtocolHandler implementation is Http11NioProtocol.
2) When the Connector is created
While Catalina parses server.xml, namely in the Catalina.createStartDigester() method, we find the following code:
digester.addRule("Server/Service/Connector",
                 new ConnectorCreateRule());
digester.addRule("Server/Service/Connector",
                 new SetAllPropertiesRule(new String[]{"executor", "sslImplementationName"}));
digester.addSetNext("Server/Service/Connector",
                    "addConnector",
                    "org.apache.catalina.connector.Connector");
As we can see, the Connector is created while Catalina parses server.xml.
3) Connector.start() starts the components the Connector contains
The Connector is created while Catalina parses server.xml; so when are the Connector and its components actually started?
We know a Connector belongs to a Service, and a Service belongs to the Server. Once parsing finishes, the Server is created and started, and its start() method in turn starts each Service and its Connectors (see the paraphrased sketch below).
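The cascade can be seen in StandardService.startInternal(); the following is a paraphrased sketch of the Tomcat 8.5/9 source, not a verbatim copy:

// Paraphrased from StandardService.startInternal(): starting a Service starts
// its Engine, its Executors, the MapperListener and finally its Connectors.
@Override
protected void startInternal() throws LifecycleException {
    setState(LifecycleState.STARTING);
    if (engine != null) {
        engine.start();                       // the Catalina container
    }
    for (Executor executor : findExecutors()) {
        executor.start();                     // shared thread pools, if configured
    }
    mapperListener.start();                   // keeps the request Mapper up to date
    synchronized (connectorsLock) {
        for (Connector connector : connectors) {
            connector.start();                // ends up in Connector.startInternal() above
        }
    }
}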
As 1) showed, Connector.start() calls ProtocolHandler.start(); let's look at that start method next.
4. Http11NioProtocol source analysis
The class hierarchy is shown in the figure above.
1) Key member variables
They are declared in the AbstractProtocol class, as shown below:
public abstract class AbstractProtocol<S> implements ProtocolHandler,
        MBeanRegistration {

    /**
     * Endpoint that provides low-level network I/O - must be matched to the
     * ProtocolHandler implementation (ProtocolHandler using NIO, requires NIO
     * Endpoint etc.).
     */
    private final AbstractEndpoint<S> endpoint;

    /**
     * The adapter provides the link between the ProtocolHandler and the
     * connector.
     */
    protected Adapter adapter;

    private final Set<Processor> waitingProcessors =
            Collections.newSetFromMap(new ConcurrentHashMap<Processor, Boolean>());

    /**
     * Create and configure a new Processor instance for the current protocol
     * implementation.
     *
     * @return A fully configured Processor instance that is ready to use
     */
    protected abstract Processor createProcessor();
The source confirms the diagram above:
a ProtocolHandler contains an Endpoint, a Processor, and an Adapter; each of the three has its own duty, and together they parse, wrap, and pass on the request.
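Where does this wiring happen? Paraphrased from Connector.initInternal() (again a sketch, not the verbatim source): the Connector creates the CoyoteAdapter and hands it to the ProtocolHandler, which in turn passes it to every Processor it later creates.

// Paraphrased from Connector.initInternal()
@Override
protected void initInternal() throws LifecycleException {
    super.initInternal();
    // the Adapter is the bridge from Coyote into the Servlet container
    adapter = new CoyoteAdapter(this);
    protocolHandler.setAdapter(adapter);
    // ... validation elided ...
    protocolHandler.init();
}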
2) The ProtocolHandler.start() method
Let's see what the handler does when it starts:
// the concrete implementation is in AbstractProtocol.start()
@Override
public void start() throws Exception {
    if (getLog().isInfoEnabled())
        getLog().info(sm.getString("abstractProtocolHandler.start",
                getName()));
    try {
        // essentially just starts the endpoint
        endpoint.start();
    } catch (Exception ex) {
        getLog().error(sm.getString("abstractProtocolHandler.startError",
                getName()), ex);
        throw ex;
    }
    ...
}
As shown above, starting the ProtocolHandler essentially means starting the endpoint.
5. Endpoint.start()
Tomcat has no class named simply Endpoint; the base abstraction is AbstractEndpoint, whose start() method is shown below.
// AbstractEndpoint.start()
public final void start() throws Exception {
    if (bindState == BindState.UNBOUND) {
        // bind() essentially creates the ServerSocket and binds it to the configured address
        bind();
        bindState = BindState.BOUND_ON_START;
    }
    // startInternal() is an abstract method implemented by subclasses
    startInternal();
}
The AbstractEndpoint implementation we are using here is NioEndpoint; its startInternal method looks like this:
/**
 * Start the NIO endpoint, creating acceptor, poller threads.
 */
@Override
public void startInternal() throws Exception {
    if (!running) {
        running = true;
        paused = false;

        processorCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getProcessorCache());
        eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getEventCache());
        nioChannels = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getBufferPool());

        // 1. create the Executor
        if ( getExecutor() == null ) {
            createExecutor();
        }
        // 2. create the max-connections limiter
        initializeConnectionLatch();
        // 3. create the Pollers and start a thread for each Poller task
        pollers = new Poller[getPollerThreadCount()];
        for (int i=0; i<pollers.length; i++) {
            pollers[i] = new Poller();
            Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-"+i);
            pollerThread.setPriority(threadPriority);
            pollerThread.setDaemon(true);
            pollerThread.start();
        }
        // 4. create and start the Acceptors
        startAcceptorThreads();
    }
}
Let's analyze these four steps one by one.
1) createExecutor() simply creates a thread pool
// AbstractEndpoint.createExecutor()
public void createExecutor() {
    internalExecutor = true;
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
    taskqueue.setParent( (ThreadPoolExecutor) executor);
}
2) initializeConnectionLatch() creates the max-connections limiter
// AbstractEndpoint.initializeConnectionLatch()
protected LimitLatch initializeConnectionLatch() {
    if (maxConnections==-1) return null;
    if (connectionLimitLatch==null) {
        // just creates the LimitLatch
        // a look at the LimitLatch source shows it is simply a limiter
        // for the maximum number of connections
        connectionLimitLatch = new LimitLatch(getMaxConnections());
    }
    return connectionLimitLatch;
}
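Conceptually, LimitLatch behaves like a counting semaphore sized at maxConnections (8192 by default for the NIO endpoint): countUpOrAwaitConnection() blocks the Acceptor once the limit is reached, and every connection that closes frees one slot. A rough JDK analogy (Tomcat's real implementation is built on AbstractQueuedSynchronizer, not Semaphore):

import java.util.concurrent.Semaphore;

public class ConnectionLimitDemo {
    // one permit per allowed concurrent connection
    private static final Semaphore CONNECTION_LIMIT = new Semaphore(8192);

    static void handleOneConnection() throws InterruptedException {
        CONNECTION_LIMIT.acquire();          // ~ countUpOrAwaitConnection(): blocks at the limit
        try {
            // accept and process one client socket here
        } finally {
            CONNECTION_LIMIT.release();      // ~ countDownConnection() when the socket closes
        }
    }
}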
3) Poller — the Poller array is created and a daemon thread is started for each Poller task (analyzed separately in section 8)
4) startAcceptorThreads() — like the Pollers, creates a number of Acceptors and starts a thread for each Acceptor task
// AbstractEndpoint.startAcceptorThreads()
protected final void startAcceptorThreads() {
    int count = getAcceptorThreadCount();
    acceptors = new Acceptor[count];
    for (int i = 0; i < count; i++) {
        acceptors[i] = createAcceptor();
        String threadName = getName() + "-Acceptor-" + i;
        acceptors[i].setThreadName(threadName);
        Thread t = new Thread(acceptors[i], threadName);
        t.setPriority(getAcceptorThreadPriority());
        t.setDaemon(getDaemon());
        t.start();
    }
}
Poller and Acceptor deserve separate analysis.
Let's look at the Acceptor first.
6. NioEndpoint.Acceptor (accepts client connections and registers them)
The Acceptor source is shown below.
It is an abstract class implementing Runnable; run itself is not implemented here but in the concrete subclasses.
public abstract static class Acceptor implements Runnable {
    public enum AcceptorState {
        NEW, RUNNING, PAUSED, ENDED
    }

    protected volatile AcceptorState state = AcceptorState.NEW;
    public final AcceptorState getState() {
        return state;
    }

    private String threadName;
    protected final void setThreadName(final String threadName) {
        this.threadName = threadName;
    }
    protected final String getThreadName() {
        return threadName;
    }
}
Acceptor has three concrete implementations; here is NioEndpoint.Acceptor:
protected class Acceptor extends AbstractEndpoint.Acceptor {
    @Override
    public void run() {
        ...
        while (running) {
            try {
                // 1. if the max connection count is exceeded, wait
                countUpOrAwaitConnection();
                SocketChannel socket = null;
                try {
                    // 2. the traditional way of accepting a client socket connection
                    socket = serverSock.accept();
                } catch (IOException ioe) {
                    ...
                }
                ...
                // Configure the socket
                if (running && !paused) {
                    // 3. wrap the socket; the interesting work happens in this method
                    if (!setSocketOptions(socket)) {
                        closeSocket(socket);
                    }
                    ...
                }
                ...
}
// NioEndpoint.setSocketOptions()
protected boolean setSocketOptions(SocketChannel socket) {
    // Process the connection
    try {
        socket.configureBlocking(false);
        // 1. get the underlying socket
        Socket sock = socket.socket();
        socketProperties.setProperties(sock);
        // 2. a NioChannel is essentially a wrapper around the SocketChannel;
        //    try to reuse a cached one first
        NioChannel channel = nioChannels.pop();
        if (channel == null) {
            SocketBufferHandler bufhandler = new SocketBufferHandler(
                    socketProperties.getAppReadBufSize(),
                    socketProperties.getAppWriteBufSize(),
                    socketProperties.getDirectBuffer());
            if (isSSLEnabled()) {
                channel = new SecureNioChannel(socket, bufhandler, selectorPool, this);
            } else {
                channel = new NioChannel(socket, bufhandler);
            }
        } else {
            channel.setIOChannel(socket);
            channel.reset();
        }
        // 3. register with a Poller -- the key step
        //    getPoller0() picks a Poller from the Poller[] array round-robin,
        //    and the NioChannel is registered with it
        getPoller0().register(channel);
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        try {
            log.error("",t);
        } catch (Throwable tt) {
            ExceptionUtils.handleThrowable(tt);
        }
        // Tell to close the socket
        return false;
    }
    return true;
}
// NioEndpoint.Poller.register(channel)
public void register(final NioChannel socket) {
    socket.setPoller(this);
    NioSocketWrapper ka = new NioSocketWrapper(socket, NioEndpoint.this);
    socket.setSocketWrapper(ka);
    ka.setPoller(this);
    ka.setReadTimeout(getSocketProperties().getSoTimeout());
    ka.setWriteTimeout(getSocketProperties().getSoTimeout());
    ka.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
    ka.setSecure(isSSLEnabled());
    ka.setReadTimeout(getSoTimeout());
    ka.setWriteTimeout(getSoTimeout());
    // 1. fetch a PollerEvent from the cache
    PollerEvent r = eventCache.pop();
    // 2. mark read interest on the NioSocketWrapper
    ka.interestOps(SelectionKey.OP_READ);
    // 3. bind the socket to the PollerEvent
    if ( r==null) r = new PollerEvent(socket,ka,OP_REGISTER);
    else r.reset(socket,ka,OP_REGISTER);
    // 4. add the bound event to the Poller's events queue
    addEvent(r);
}
Let's pause and summarize the Acceptor's work:
1) As an independent thread, it loops endlessly in its run method, waiting for client connections
2) As soon as a client connects, it wraps the corresponding socket in a NioChannel
3) It binds the NioChannel to a PollerEvent
4) It puts that PollerEvent into the Poller's events queue, where it waits to be run
7. NioEndpoint.PollerEvent
We just saw the NioChannel being bound to a PollerEvent and the PollerEvent being added to events; let's look at PollerEvent first.
public static class PollerEvent implements Runnable {

    private NioChannel socket;
    private int interestOps;
    private NioSocketWrapper socketWrapper;
    ...
    @Override
    public void run() {
        // 1. for a register event, register the SocketChannel with the Selector
        //    and listen for READ events; compare with the Poller.register(NioChannel)
        //    method above -- what gets queued is an OP_REGISTER event,
        //    with the socketWrapper as the attachment
        if (interestOps == OP_REGISTER) {
            try {
                socket.getIOChannel().register(
                        socket.getPoller().getSelector(), SelectionKey.OP_READ, socketWrapper);
            } catch (Exception x) {
                log.error(sm.getString("endpoint.nio.registerFail"), x);
            }
        } else {
            final SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
            try {
                if (key == null) {
                    // The key was cancelled (e.g. due to socket closure)
                    // and removed from the selector while it was being
                    // processed. Count down the connections at this point
                    // since it won't have been counted down when the socket
                    // closed.
                    socket.socketWrapper.getEndpoint().countDownConnection();
                } else {
                    // 2. not a register event: merge the new interestOps into
                    //    the key's existing interest set
                    final NioSocketWrapper socketWrapper = (NioSocketWrapper) key.attachment();
                    if (socketWrapper != null) {
                        //we are registering the key to start with, reset the fairness counter.
                        int ops = key.interestOps() | interestOps;
                        socketWrapper.interestOps(ops);
                        key.interestOps(ops);
                    } else {
                        socket.getPoller().cancelledKey(key);
                    }
                }
            } catch (CancelledKeyException ckx) {
                try {
                    socket.getPoller().cancelledKey(key);
                } catch (Exception ignore) {}
            }
        }
    }
}
Summary: the PollerEvent's main job is to register READ interest on the Selector for each client connection.
8. NioEndpoint.Poller (the key component: detects client events and dispatches them for processing)
public class Poller implements Runnable {

    private Selector selector;
    private final SynchronizedQueue<PollerEvent> events =
            new SynchronizedQueue<>();

    // fetch all pending events and run them one by one
    public boolean events() {
        boolean result = false;
        PollerEvent pe = null;
        while ( (pe = events.poll()) != null ) {
            result = true;
            try {
                pe.run();
                // after processing, reset the event so it can be reused
                // for the next registration
                pe.reset();
                if (running && !paused) {
                    eventCache.push(pe);
                }
            } catch ( Throwable x ) {
                log.error("",x);
            }
        }
        return result;
    }

    @Override
    public void run() {
        // Loop until destroy() is called
        while (true) {
            boolean hasEvents = false;
            // 1. run all queued events, registering the client connections
            //    with the Selector and listening for their READ events
            try {
                if (!close) {
                    hasEvents = events();
                    if (wakeupCounter.getAndSet(-1) > 0) {
                        //if we are here, means we have other stuff to do
                        //do a non blocking select
                        keyCount = selector.selectNow();
                    } else {
                        keyCount = selector.select(selectorTimeout);
                    }
                    wakeupCounter.set(0);
                }
                ...
            } catch (Throwable x) {
            }
            //either we timed out or we woke up, process events first
            if ( keyCount == 0 ) hasEvents = (hasEvents | events());

            Iterator<SelectionKey> iterator =
                keyCount > 0 ? selector.selectedKeys().iterator() : null;
            // 2. a client event has been selected
            while (iterator != null && iterator.hasNext()) {
                SelectionKey sk = iterator.next();
                NioSocketWrapper attachment = (NioSocketWrapper)sk.attachment();
                // Attachment may be null if another thread has called
                // cancelledKey()
                if (attachment == null) {
                    iterator.remove();
                } else {
                    iterator.remove();
                    // 3. finally, processKey handles the client event
                    processKey(sk, attachment);
                }
            }//while

            //process timeouts
            timeout(keyCount,hasEvents);
        }//while

        getStopLatch().countDown();
    }
// NioEndpoint.Poller.processKey(sk, attachment)
// handles a client event
protected void processKey(SelectionKey sk, NioSocketWrapper attachment) {
    try {
        if ( close ) {
            cancelledKey(sk);
        } else if ( sk.isValid() && attachment != null ) {
            if (sk.isReadable() || sk.isWritable() ) {
                if ( attachment.getSendfileData() != null ) {
                    processSendfile(sk,attachment, false);
                } else {
                    unreg(sk, attachment, sk.readyOps());
                    boolean closeSocket = false;
                    // 1. handle the read event
                    if (sk.isReadable()) {
                        if (!processSocket(attachment, SocketEvent.OPEN_READ, true)) {
                            closeSocket = true;
                        }
                    }
                    // 2. handle the write event
                    if (!closeSocket && sk.isWritable()) {
                        if (!processSocket(attachment, SocketEvent.OPEN_WRITE, true)) {
                            closeSocket = true;
                        }
                    }
                    if (closeSocket) {
                        cancelledKey(sk);
                    }
                }
            }
        }
        ...
}
// AbstractEndpoint.processSocket()
public boolean processSocket(SocketWrapperBase<S> socketWrapper, SocketEvent event, boolean dispatch) {
    try {
        if (socketWrapper == null) {
            return false;
        }
        SocketProcessorBase<S> sc = processorCache.pop();
        // 1. bind the socketWrapper to a SocketProcessor
        if (sc == null) {
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            sc.reset(socketWrapper, event);
        }
        // 2. fetch the thread pool and submit the SocketProcessor task;
        //    if there is no pool, run it directly on the current thread
        Executor executor = getExecutor();
        if (dispatch && executor != null) {
            executor.execute(sc);
        } else {
            sc.run();
        }
        ...
        return true;
}
Summary: this completes the first stage. The main actor is the Endpoint, whose work is:
1) NioEndpoint starts up and listens on the configured host:port
2) NioEndpoint.Acceptor accepts client connection requests and wraps each socket in a PollerEvent, which it puts into a Poller
3) The PollerEvent registers READ interest for the client connection
4) The Poller detects ready client events and hands each socket to a SocketProcessor for processing
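For readers less familiar with java.nio, here is a minimal, self-contained selector loop mirroring the Acceptor / PollerEvent / Poller pattern described above (purely illustrative, not Tomcat code; Tomcat additionally runs accept and select on separate threads and dispatches ready keys to a worker pool):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniPoller {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();   // ~ Poller.run(): wait for ready events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // ~ Acceptor: accept the connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    // ~ PollerEvent.run() with OP_REGISTER: listen for READ
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // ~ processKey() -> processSocket(): Tomcat would hand the
                    //   socket to a SocketProcessor on the worker pool here
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) < 0) {
                        client.close();
                    }
                }
            }
        }
    }
}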
9. SocketProcessor (stage two: the SocketProcessor hands the socket request over to a Processor)
1) SocketProcessor.doRun() (called from SocketProcessorBase.run())
protected class SocketProcessor extends SocketProcessorBase<NioChannel> {

    public SocketProcessor(SocketWrapperBase<NioChannel> socketWrapper, SocketEvent event) {
        super(socketWrapper, event);
    }

    @Override
    protected void doRun() {
        NioChannel socket = socketWrapper.getSocket();
        SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
        try {
            int handshake = -1;
            try {
                if (key != null) {
                    // 1. NioChannel.isHandshakeComplete() is true by default
                    if (socket.isHandshakeComplete()) {
                        // No TLS handshaking required. Let the handler
                        // process this socket / event combination.
                        handshake = 0;
                    }
                    ...
                }
            }
            ...
            if (handshake == 0) {
                SocketState state = SocketState.OPEN;
                // when event is null, default to a read event
                if (event == null) {
                    state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
                } else {
                    // hand over to AbstractProtocol.ConnectionHandler (the key step)
                    state = getHandler().process(socketWrapper, event);
                }
                if (state == SocketState.CLOSED) {
                    close(socket, key);
                }
            } else if (handshake == -1 ) {
                close(socket, key);
            } else if (handshake == SelectionKey.OP_READ){
                socketWrapper.registerReadInterest();
            } else if (handshake == SelectionKey.OP_WRITE){
                socketWrapper.registerWriteInterest();
            }
        } catch (CancelledKeyException cx) {
            ...
        }
        ...
    }
}
2) AbstractProtocol.ConnectionHandler.process(socketWrapper, event) — this step passes processing on to a Processor
@Override
public SocketState process(SocketWrapperBase<S> wrapper, SocketEvent status) {
    if (getLog().isDebugEnabled()) {
        getLog().debug(sm.getString("abstractConnectionHandler.process",
                wrapper.getSocket(), status));
    }
    if (wrapper == null) {
        // Nothing to do. Socket has been closed.
        return SocketState.CLOSED;
    }
    S socket = wrapper.getSocket();
    // 1. look up the Processor for this socket
    Processor processor = connections.get(socket);
    if (getLog().isDebugEnabled()) {
        getLog().debug(sm.getString("abstractConnectionHandler.connectionsGet",
                processor, socket));
    }
    ...
    try {
        ...
        // 2. if there is no Processor yet, try to recycle one;
        //    failing that, create a new one
        if (processor == null) {
            processor = recycledProcessors.pop();
            if (getLog().isDebugEnabled()) {
                getLog().debug(sm.getString("abstractConnectionHandler.processorPop",
                        processor));
            }
        }
        if (processor == null) {
            processor = getProtocol().createProcessor();
            register(processor);
        }
        ...
        connections.put(socket, processor);

        SocketState state = SocketState.CLOSED;
        do {
            // 3. process the client event; for a read event,
            //    status is SocketEvent.OPEN_READ at this point
            state = processor.process(wrapper, status);
            if (state == SocketState.UPGRADING) {
                ...
                // protocol-upgrade handling
            }
        } while ( state == SocketState.UPGRADING);

        // what follows handles write events, keep-alive and so on;
        // we won't look at it in detail
        if (state == SocketState.LONG) {
            // In the middle of processing a request/response. Keep the
            // socket associated with the processor. Exact requirements
            // depend on type of long poll
            longPoll(wrapper, processor);
            if (processor.isAsync()) {
                getProtocol().addWaitingProcessor(processor);
            }
        } else if (state == SocketState.OPEN) {
            // In keep-alive but between requests. OK to recycle
            // processor. Continue to poll for the next request.
            connections.remove(socket);
            release(processor);
            wrapper.registerReadInterest();
        }
        ...
        return state;
    } catch(java.net.SocketException e) {
        // SocketExceptions are normal
        getLog().debug(sm.getString(
                "abstractConnectionHandler.socketexception.debug"), e);
    }
    ...
    // Make sure socket/processor is removed from the list of current
    // connections
    connections.remove(socket);
    release(processor);
    return SocketState.CLOSED;
}
3) AbstractProcessorLight.process(wrapper, status)
@Override
public SocketState process(SocketWrapperBase<?> socketWrapper, SocketEvent status)
        throws IOException {

    SocketState state = SocketState.CLOSED;
    Iterator<DispatchType> dispatches = null;
    do {
        if (...) {
            // elided: dispatch, disconnect and async branches
        } else if (status == SocketEvent.OPEN_WRITE) {
            // Extra write event likely after async, ignore
            state = SocketState.LONG;
        } else if (status == SocketEvent.OPEN_READ){
            // the read event is the one we care about here;
            // service() is an abstract method implemented by subclasses --
            // in this walkthrough we follow the Http11Processor implementation,
            // analyzed below
            state = service(socketWrapper);
        } else {
            // Default to closing the socket if the SocketEvent passed in
            // is not consistent with the current state of the Processor
            state = SocketState.CLOSED;
        }
        ...
    } while (state == SocketState.ASYNC_END ||
            dispatches != null && state != SocketState.CLOSED);

    return state;
}
Summary:
the SocketProcessor classifies the event and hands it to a Processor;
the Processor will then hand the request on to the Adapter.
10. Http11Processor.service(socketWrapper) (stage three: the Processor turns the socket data received by the Endpoint into a Request)
@Override
public SocketState service(SocketWrapperBase<?> socketWrapper)
        throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);
    ...
    while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
            sendfileState == SendfileState.DONE && !endpoint.isPaused()) {

        // Parsing the request header
        ...

        // Has an upgrade been requested?
        Enumeration<String> connectionValues = request.getMimeHeaders().values("Connection");
        boolean foundUpgrade = false;
        while (connectionValues.hasMoreElements() && !foundUpgrade) {
            foundUpgrade = connectionValues.nextElement().toLowerCase(
                    Locale.ENGLISH).contains("upgrade");
        }
        ...

        // 1. the key business processing happens here
        if (!getErrorState().isError()) {
            try {
                rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
                // hand the request to the Adapter
                getAdapter().service(request, response);
                ...
            } catch (InterruptedIOException e) {
                setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
            }
            ...
        }
        ...
}
11. Adapter.service(request, response) (stage four: the Adapter hands the request to the actual Container)
Adapter is an interface; the concrete implementation is CoyoteAdapter. Here is its service method:
@Override
public void service(org.apache.coyote.Request req, org.apache.coyote.Response res)
        throws Exception {

    Request request = (Request) req.getNote(ADAPTER_NOTES);
    Response response = (Response) res.getNote(ADAPTER_NOTES);

    // 1. convert request/response into objects that follow the Servlet spec
    if (request == null) {
        // Create objects
        request = connector.createRequest();
        request.setCoyoteRequest(req);
        response = connector.createResponse();
        response.setCoyoteResponse(res);
        // Link objects
        request.setResponse(response);
        response.setRequest(request);
        // Set as notes
        req.setNote(ADAPTER_NOTES, request);
        res.setNote(ADAPTER_NOTES, response);
        // Set query string encoding
        req.getParameters().setQueryStringEncoding(connector.getURIEncoding());
    }
    if (connector.getXpoweredBy()) {
        response.addHeader("X-Powered-By", POWERED_BY);
    }
    boolean async = false;
    boolean postParseSuccess = false;
    req.getRequestProcessor().setWorkerThreadName(THREAD_NAME.get());
    try {
        // 2. convert the request parameters and perform the request mapping;
        //    this maps the request to a concrete Wrapper
        postParseSuccess = postParseRequest(req, request, res, response);
        if (postParseSuccess) {
            //check valves if we support async
            request.setAsyncSupported(
                    connector.getService().getContainer().getPipeline().isAsyncSupported());
            // 3. fetch the first Valve of the Container and call its invoke method;
            //    the valves form a chain of responsibility, each invoking the next,
            //    to complete the client request
            connector.getService().getContainer().getPipeline().getFirst().invoke(
                    request, response);
        }
        ...
    } catch (IOException e) {
        // Ignore
    } finally {
        ...
    }
}
1) postParseRequest(req, request, res, response) performs the request mapping
This is a very complex method with a great many details, and it is easy to get lost in; I won't walk through it here.
It's enough to know that the client's request path is mapped to a specific, valid Wrapper, and that the mapping result is stored in MappingData (see the paraphrased sketch below).
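The shape of the mapping result is easy to see, though; paraphrased from org.apache.catalina.mapper.MappingData (a sketch of the main fields, not the complete class):

// Paraphrased from org.apache.catalina.mapper.MappingData: a plain holder
// whose fields Mapper.map() fills in while walking Host -> Context -> Wrapper.
public class MappingData {
    public Host host = null;
    public Context context = null;
    public Wrapper wrapper = null;

    public final MessageBytes contextPath = MessageBytes.newInstance();
    public final MessageBytes requestPath = MessageBytes.newInstance();
    public final MessageBytes wrapperPath = MessageBytes.newInstance();
    // ... further fields (pathInfo, redirectPath, ...) elided
}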
2) connector.getService().getContainer().getPipeline().getFirst().invoke(request, response) fetches the first Valve of the current Engine's pipeline and invokes it, completing the client request
At this point the client request has been routed to a concrete Servlet; its service method runs, the response is returned, and the exchange ends.
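To give a feel for this chain of responsibility, here is a minimal custom Valve sketch (the class name TimingValve is made up; the ValveBase API it uses is real). Each valve does its own work and then explicitly invokes the next one; the last ("basic") valve of each container's pipeline forwards to the next container, until StandardWrapperValve finally calls the servlet's service() method.

import java.io.IOException;
import javax.servlet.ServletException;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class TimingValve extends ValveBase {
    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        // pass the request down the chain
        getNext().invoke(request, response);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(request.getRequestURI() + " took " + elapsed + "ms");
    }
}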
I'll cover Pipeline and Valve in detail in a separate post.
Summary:
That wraps up the handling of a web request. The diagram below recaps the whole flow;
readers can use it to review the analysis above.
Reference: Tomcat架構解析 (Liu Guangrui)
Reference blog: https://blog.csdn.net/xlgen157387/article/details/79006434