UnknownSec Shell

: /usr/lib64/python2.7/ [ drwxr-xr-x ]

name : urlparse.pyo
[urlparse.pyo is a compiled (marshalled) Python 2.7 module; only the embedded strings quoted below are recoverable from its bytecode.]

Module docstring:

Parse (absolute and relative) URLs.

The urlparse module is based upon the following RFC specifications.

RFC 3986 (STD66): "Uniform Resource Identifiers" by T. Berners-Lee, R. Fielding
and L. Masinter, January 2005.

RFC 2732: "Format for Literal IPv6 Addresses in URL's" by R. Hinden, B. Carpenter
and L. Masinter, December 1999.

RFC 2396: "Uniform Resource Identifiers (URI): Generic Syntax" by T.
Berners-Lee, R. Fielding, and L. Masinter, August 1998.

RFC 2368: "The mailto URL scheme", by P. Hoffman, L. Masinter, J. Zwinski, July 1998.

RFC 1808: "Relative Uniform Resource Locators", by R. Fielding, UC Irvine, June
1995.

RFC 1738: "Uniform Resource Locators (URL)" by T. Berners-Lee, L. Masinter, M.
McCahill, December 1994.

RFC 3986 is considered the current standard, and any future changes to
the urlparse module should conform to it.  The urlparse module is
currently not entirely compliant with this RFC: owing to de facto
parsing scenarios and for backward compatibility, some parsing quirks
from older RFCs are retained.  The test cases in test_urlparse.py
provide a good indicator of parsing behavior.

The WHATWG URL Parser spec should also be considered.  We are not compliant with
it either, owing to the API behavior that existing user code expects (Hyrum's Law).
It serves as a useful guide when making changes.

Recovered definitions and docstrings from the bytecode:

__all__ = ['urlparse', 'urlunparse', 'urljoin', 'urldefrag', 'urlsplit', 'urlunsplit', 'parse_qs', 'parse_qsl']

scheme_chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+-.'

The module also embeds scheme classification tables (uses_relative, uses_netloc, uses_params, non_hierarchical, uses_query, uses_fragment) covering schemes such as ftp, http, gopher, nntp, imap, wais, file, https, shttp, mms, prospero, rtsp, rtspu, sftp, svn, svn+ssh, telnet, snews, rsync, nfs, git, git+ssh, hdl, sip, sips, tel, mailto and news, plus a parse-result cache cleared by clear_cache() ("Clear the parse cache.").

ResultMixin:
    Shared methods for the parsed result objects.
    Provides the username, password, hostname and port properties, derived from the netloc component.

SplitResult(scheme, netloc, path, query, fragment) and ParseResult(scheme, netloc, path, params, query, fragment):
    namedtuple subclasses returned by urlsplit() and urlparse(); their geturl() method reassembles the URL.
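As a quick illustration of the result objects described above (using urlparse(), which is documented just below), here is a doctest-style sketch; it assumes a Python 2.x interpreter where this module imports as urlparse, and the URL is an invented example:

    >>> from urlparse import urlparse
    >>> r = urlparse('http://usr:pw@www.example.com:8042/over/there;p?name=ferret#nose')
    >>> # the six ParseResult fields
    >>> (r.scheme, r.netloc, r.path, r.params, r.query, r.fragment)
    ('http', 'usr:pw@www.example.com:8042', '/over/there', 'p', 'name=ferret', 'nose')
    >>> # ResultMixin properties derived from the netloc
    >>> (r.username, r.password, r.hostname, r.port)
    ('usr', 'pw', 'www.example.com', 8042)
    >>> r.geturl() == 'http://usr:pw@www.example.com:8042/over/there;p?name=ferret#nose'
    True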
urlparse(url[, scheme[, allow_fragments]]):
    Parse a URL into 6 components:
    <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
    Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes.

[Internal helpers recovered from the bytecode: _splitparams, _splitnetloc, _checknetloc — which raises ValueError("netloc %r contains invalid characters under NFKC normalization") — and _remove_unsafe_bytes_from_url, which strips the bytes in _UNSAFE_URL_BYTES_TO_REMOVE before parsing.]

urlsplit(url[, scheme[, allow_fragments]]):
    Parse a URL into 5 components:
    <scheme>://<netloc>/<path>?<query>#<fragment>
    Return a 5-tuple: (scheme, netloc, path, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes.

[The urlsplit body also raises ValueError("Invalid IPv6 URL") for a netloc with unbalanced brackets, strips leading and trailing C0-control and space characters, and caches results in _parse_cache.]
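A hedged sketch of urlsplit(), again assuming Python 2.x and made-up URLs; unlike urlparse() it leaves any ;params inside the path:

    >>> from urlparse import urlsplit
    >>> urlsplit('http://www.example.com/over/there;p?name=ferret#nose')
    SplitResult(scheme='http', netloc='www.example.com', path='/over/there;p', query='name=ferret', fragment='nose')
    >>> # a caller-supplied default scheme is used when the URL has none
    >>> urlsplit('help/index.html', scheme='file')
    SplitResult(scheme='file', netloc='', path='help/index.html', query='', fragment='')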
urlunparse(data):
    Put a parsed URL back together again.  This may result in a
    slightly different, but equivalent URL, if the URL that was parsed
    originally had redundant delimiters, e.g. a ? with an empty query
    (the draft states that these are equivalent).

urlunsplit(data):
    Combine the elements of a tuple as returned by urlsplit() into a
    complete URL as a string. The data argument can be any five-item iterable.
    This may result in a slightly different, but equivalent URL, if the URL that
    was parsed originally had unnecessary delimiters (for example, a ? with an
    empty query; the RFC states that these are equivalent).
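A small sketch of the inverse operations, under the same Python 2.x assumption and with invented URLs; the tuples mirror the 5- and 6-element layouts documented above:

    >>> from urlparse import urlsplit, urlunsplit, urlunparse
    >>> parts = urlsplit('http://www.example.com/path?q=1#frag')
    >>> urlunsplit(parts)                 # round-trips the 5-tuple from urlsplit
    'http://www.example.com/path?q=1#frag'
    >>> # urlunparse takes the 6-element form (params sits between path and query)
    >>> urlunparse(('http', 'www.example.com', '/path', '', 'q=1', ''))
    'http://www.example.com/path?q=1'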
urljoin(base, url[, allow_fragments]):
    Join a base URL and a possibly relative URL to form an absolute
    interpretation of the latter.

urldefrag(url):
    Removes any existing fragment from URL.

    Returns a tuple of the defragmented URL and the fragment.  If
    the URL contained no fragments, the second element is the
    empty string.
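A doctest-style sketch covering urljoin() and urldefrag() together, with invented URLs and the usual Python 2.x assumption:

    >>> from urlparse import urljoin, urldefrag
    >>> urljoin('http://www.example.com/dir/page.html', 'sub/other.html')
    'http://www.example.com/dir/sub/other.html'
    >>> urljoin('http://www.example.com/dir/page.html', '../top.html')   # '..' steps up one directory
    'http://www.example.com/top.html'
    >>> urldefrag('http://www.example.com/page.html#section-2')
    ('http://www.example.com/page.html', 'section-2')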
unquote(s):
    unquote('abc%20def') -> 'abc def'.
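A short sketch of unquote(), assuming Python 2.x; note that it only decodes %xx escapes and does not touch '+' signs:

    >>> from urlparse import unquote
    >>> unquote('abc%20def')
    'abc def'
    >>> unquote('%7Euser/index.html%3Fx%3D1')   # '%7E' -> '~', '%3F' -> '?', '%3D' -> '='
    '~user/index.html?x=1'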
parse_qs(qs[, keep_blank_values[, strict_parsing[, max_num_fields[, separator]]]]):
    Parse a query given as a string argument.

        Arguments:

        qs: percent-encoded query string to be parsed

        keep_blank_values: flag indicating whether blank values in
            percent-encoded queries should be treated as blank strings.
            A true value indicates that blanks should be retained as
            blank strings.  The default false value indicates that
            blank values are to be ignored and treated as if they were
            not included.

        strict_parsing: flag indicating what to do with parsing errors.
            If false (the default), errors are silently ignored.
            If true, errors raise a ValueError exception.

        max_num_fields: int. If set, then throws a ValueError if there
            are more than max_num_fields fields read by parse_qsl().
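A sketch of parse_qs() under the same Python 2.x assumption, with made-up query strings; comparisons against literal dicts are used because dictionary ordering is not guaranteed:

    >>> from urlparse import parse_qs
    >>> parse_qs('name=ferret&color=red&color=blue') == {'name': ['ferret'], 'color': ['red', 'blue']}
    True
    >>> parse_qs('name=ferret&empty=') == {'name': ['ferret']}   # blank values are dropped by default
    True
    >>> parse_qs('name=ferret&empty=', keep_blank_values=1) == {'name': ['ferret'], 'empty': ['']}
    True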
[The parse_qs body recovered from the bytecode builds its dict by delegating to parse_qsl().  The bytecode also embeds a _QueryStringSeparatorWarning class ("Warning for using default `separator` in parse_qs or parse_qsl") and the config file path /etc/python/urllib.cfg.]

parse_qsl(qs[, keep_blank_values[, strict_parsing[, max_num_fields[, separator]]]]):
    Parse a query given as a string argument.

    Arguments:

    qs: percent-encoded query string to be parsed

    keep_blank_values: flag indicating whether blank values in
        percent-encoded queries should be treated as blank strings.  A
        true value indicates that blanks should be retained as blank
        strings.  The default false value indicates that blank values
        are to be ignored and treated as if they were not included.

    strict_parsing: flag indicating what to do with parsing errors. If
        false (the default), errors are silently ignored. If true,
        errors raise a ValueError exception.

    max_num_fields: int. If set, then throws a ValueError if there
        are more than max_num_fields fields read by parse_qsl().

    Returns a list, as G-d intended.
[Recovered from the parse_qsl body: the separator argument must be a string ("Separator must be of type string or bytes.").  When it is not given, the default is taken from the PYTHON_URLLIB_QS_SEPARATOR environment variable or from /etc/python/urllib.cfg, and otherwise falls back to '&'.  A warning string embedded in the bytecode explains: "The default separator of urlparse.parse_qsl and parse_qs was changed to '&' to avoid a web cache poisoning issue (CVE-2021-23336). By default, semicolons no longer act as query field separators. See https://access.redhat.com/articles/5860431 for more details."  A configured separator "must contain 1 character, or 'legacy'", where 'legacy' restores splitting on both '&' and ';'.  Other embedded errors are "Max number of fields exceeded" and "bad query field: %r".  Each field is split on '=', '+' is turned back into a space, and name and value are passed through unquote().]
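Finally, a hedged sketch of parse_qsl(); the separator argument used here is the one recovered above and appears specific to this patched build (stock CPython 2.7 releases do not accept it), and the query strings are invented:

    >>> from urlparse import parse_qsl
    >>> parse_qsl('a=1&a=2&b=hello+world%21')   # '+' becomes a space, %xx escapes are decoded
    [('a', '1'), ('a', '2'), ('b', 'hello world!')]
    >>> parse_qsl('a=1;b=2', separator=';')     # semicolons split fields only when asked for explicitly
    [('a', '1'), ('b', '2')]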
Web Design for Beginners

in Design
Created by Linda Anderson
Rating: 4.25
Duration: 1:45 Hours
Date: 8 Jul 2021
Price: ₹11.80

What you will learn

Create any website layout you can imagine

Support any device size with Responsive (mobile-friendly) Design

Add tasteful animations and effects with CSS3

Course description

You can launch a new career in web development today by learning HTML & CSS. You don't need a computer science degree or expensive software. All you need is a computer, a bit of time, a lot of determination, and a teacher you trust. I've taught HTML and CSS to countless coworkers and held training sessions for Fortune 100 companies. I am that teacher you can trust.


Don't limit yourself by creating websites with some cheesy "site-builder" tool. This course teaches you how to take 100% control over your webpages by using the same concepts that every professional website is created with.


This course does not assume any prior experience. We start at square one and learn together bit by bit. By the end of the course you will have created (by hand) a website that looks great on phones, tablets, laptops, and desktops alike.


In the summer of 2020 the course received a new section where we push our website live onto the web using the free GitHub Pages service; this means you'll be able to share a link to what you've created with your friends, family, colleagues and the world!

Requirements

No prerequisite knowledge required

No special software required
